Test Report: Docker_Linux_crio_arm64 22332

56e1ce855180c73f84c0d958e6323d58f60b3065:2025-12-27:43013

Failed tests (35/332)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.27
35 TestAddons/parallel/Registry 14.53
36 TestAddons/parallel/RegistryCreds 0.5
37 TestAddons/parallel/Ingress 8.53
38 TestAddons/parallel/InspektorGadget 5.26
39 TestAddons/parallel/MetricsServer 5.36
41 TestAddons/parallel/CSI 32.98
42 TestAddons/parallel/Headlamp 3.38
43 TestAddons/parallel/CloudSpanner 5.34
44 TestAddons/parallel/LocalPath 7.47
45 TestAddons/parallel/NvidiaDevicePlugin 6.31
46 TestAddons/parallel/Yakd 6.27
52 TestForceSystemdFlag 508.5
53 TestForceSystemdEnv 505.4
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 512.67
174 TestMultiControlPlane/serial/DeleteSecondaryNode 5.1
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 4.96
176 TestMultiControlPlane/serial/StopCluster 14.18
177 TestMultiControlPlane/serial/RestartCluster 85.78
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 3.75
179 TestMultiControlPlane/serial/AddSecondaryNode 85.95
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 4.31
191 TestJSONOutput/pause/Command 2.43
197 TestJSONOutput/unpause/Command 1.99
261 TestPause/serial/Pause 9.56
300 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.42
307 TestStartStop/group/old-k8s-version/serial/Pause 6
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.55
318 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.33
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.32
329 TestStartStop/group/embed-certs/serial/Pause 6.74
335 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.97
340 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3.6
345 TestStartStop/group/newest-cni/serial/Pause 5.74
357 TestStartStop/group/no-preload/serial/Pause 6.57
TestAddons/serial/Volcano (0.27s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-686526 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-686526 addons disable volcano --alsologtostderr -v=1: exit status 11 (270.56688ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 19:57:53.447116  281055 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:57:53.448731  281055 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:53.448753  281055 out.go:374] Setting ErrFile to fd 2...
	I1227 19:57:53.448792  281055 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:57:53.449087  281055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 19:57:53.449400  281055 mustload.go:66] Loading cluster: addons-686526
	I1227 19:57:53.449824  281055 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:53.449847  281055 addons.go:622] checking whether the cluster is paused
	I1227 19:57:53.449965  281055 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:57:53.449975  281055 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:57:53.450515  281055 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:57:53.466840  281055 ssh_runner.go:195] Run: systemctl --version
	I1227 19:57:53.466902  281055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:57:53.482637  281055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:57:53.584101  281055 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:57:53.584197  281055 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:57:53.612584  281055 cri.go:96] found id: "f7d0de1b69961a288553903b4992c35c22e5e36f33340aca5549666ccae62780"
	I1227 19:57:53.612605  281055 cri.go:96] found id: "aeb61d6ca819d545ff012b4bb79b4b12a2d3e4fd7c017b8aff9b3f73e7e4fd72"
	I1227 19:57:53.612609  281055 cri.go:96] found id: "76e4419cf17d2bcb37cb37c985de6da6879900fefbf8d8753bdc6f8b601a7b70"
	I1227 19:57:53.612613  281055 cri.go:96] found id: "8628a6c29088613cf486c32e6965fba1763bf29fdb83553c63ccc780c26e53e1"
	I1227 19:57:53.612616  281055 cri.go:96] found id: "8b5869c0f3a2d08ab29fe49e15a8657d278dcd90725095d352af4569975b4092"
	I1227 19:57:53.612620  281055 cri.go:96] found id: "4bfa93abefaefe59e582f297d87729a6eae0fb1a9a0a16200706276a345bd9f5"
	I1227 19:57:53.612623  281055 cri.go:96] found id: "db248b3a0ad2c188ffc0bd6285c801a114ef2a163b57e80967eed7058d10b079"
	I1227 19:57:53.612630  281055 cri.go:96] found id: "14cd7f4eeeb99b20541874eb2f68a4348b74b68c609a377d4e92df28a82336e2"
	I1227 19:57:53.612633  281055 cri.go:96] found id: "4060cf97180d981d325d0d9e6eb59ef66cf733a447793c3c3c19354ffe8cc564"
	I1227 19:57:53.612640  281055 cri.go:96] found id: "63f13e1a463440ef808022bc00191127d600efb20f8a410beeafe0ff3eba5e18"
	I1227 19:57:53.612644  281055 cri.go:96] found id: "76dabe108a8cdf25e513799e72a1701938261cabbaf7677deb7cf44b74e6693e"
	I1227 19:57:53.612647  281055 cri.go:96] found id: "5069e3ca60ffbe2dae6fb5bf95131972cc927b0230086e1374f3aa33984f9a66"
	I1227 19:57:53.612650  281055 cri.go:96] found id: "59198603f9dcd704e4c9bf1e3690d726408d5fbe97ca91fbee22d027956132a4"
	I1227 19:57:53.612654  281055 cri.go:96] found id: "6b3af5ed669f8def1398487b65fab3dc84efc5016bc4e43413569ae9cf491fae"
	I1227 19:57:53.612657  281055 cri.go:96] found id: "78bf49f8848895542e9d07cd088de90af51710dcb99e3756d7a1ae5577d88b11"
	I1227 19:57:53.612664  281055 cri.go:96] found id: "affa3c0ed51244e760e065712febf0f9b147fb070147eea321d9eccfb748d170"
	I1227 19:57:53.612667  281055 cri.go:96] found id: "2d16e4494d6c0f9e5b1eff95ad99704e381224b2dd00e0b326a3c2f5fdbe920c"
	I1227 19:57:53.612671  281055 cri.go:96] found id: "acf0f77c3c7122dd9f3b2603143da9d067c290830c8d6e96ae769b64065a6f69"
	I1227 19:57:53.612674  281055 cri.go:96] found id: "527b6bec92865051022b476e87f4a56edd36b846695cf95de5df71efbc3328fd"
	I1227 19:57:53.612677  281055 cri.go:96] found id: "f6bd3ae635b96e69ae3cbe10aa5433f6dad9b05cc38a83cac219a431275bfa26"
	I1227 19:57:53.612682  281055 cri.go:96] found id: "665372886f2f5a56019a7acc4aaba64773a2800425add6614a10ed8d31727212"
	I1227 19:57:53.612689  281055 cri.go:96] found id: "46d7e3e9cbbf8984d7ddb16c6a136272c405ce68b4de5cfb0b73b424a67b97ba"
	I1227 19:57:53.612693  281055 cri.go:96] found id: "2339b22f5aa5ad86cbc68a8ee3d73f387563e2d32cb08c5b0ebd3fe231f755bf"
	I1227 19:57:53.612696  281055 cri.go:96] found id: ""
	I1227 19:57:53.612743  281055 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:57:53.626707  281055 out.go:203] 
	W1227 19:57:53.629573  281055 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:57:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:57:53.629601  281055 out.go:285] * 
	* 
	W1227 19:57:53.632704  281055 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:57:53.635939  281055 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-686526 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.27s)
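The addon-disable failures in this report (this one and those that follow) all exit the same way: "addons disable" returns exit status 11 with MK_ADDON_DISABLE_PAUSED because minikube's paused-cluster check lists the kube-system containers with crictl and then runs "sudo runc list -f json", which fails on this CRI-O node with "open /run/runc: no such file or directory". A minimal sketch for reproducing the check by hand, assuming the addons-686526 profile from this run is still available (the first two commands mirror the ones in the captured stderr; the ls check is an extra diagnostic, not something the test runs):

    out/minikube-linux-arm64 -p addons-686526 ssh "sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
    out/minikube-linux-arm64 -p addons-686526 ssh "sudo runc list -f json"   # expected to fail: open /run/runc: no such file or directory
    out/minikube-linux-arm64 -p addons-686526 ssh "ls -ld /run/runc"         # checks whether the runc state directory exists at all

If the crictl call lists containers while runc fails as above, the node reproduces the exit status 11 seen in every disable call below.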

TestAddons/parallel/Registry (14.53s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 6.137101ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-6s25q" [f0b1919e-0cc3-4360-be8f-4ce4ccfcb1b4] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002956611s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-x4f62" [8d1ee49f-0706-4da1-bb4a-b08c535e2797] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003804584s
addons_test.go:394: (dbg) Run:  kubectl --context addons-686526 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-686526 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-686526 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.944960149s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-686526 ip
2025/12/27 19:58:18 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-686526 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-686526 addons disable registry --alsologtostderr -v=1: exit status 11 (249.431076ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 19:58:18.187806  281586 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:58:18.188504  281586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:18.188518  281586 out.go:374] Setting ErrFile to fd 2...
	I1227 19:58:18.188523  281586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:18.188791  281586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 19:58:18.189085  281586 mustload.go:66] Loading cluster: addons-686526
	I1227 19:58:18.190052  281586 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:18.190134  281586 addons.go:622] checking whether the cluster is paused
	I1227 19:58:18.190354  281586 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:18.190395  281586 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:58:18.191173  281586 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:58:18.210437  281586 ssh_runner.go:195] Run: systemctl --version
	I1227 19:58:18.210501  281586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:58:18.227955  281586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:58:18.332623  281586 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:58:18.332739  281586 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:58:18.361287  281586 cri.go:96] found id: "f7d0de1b69961a288553903b4992c35c22e5e36f33340aca5549666ccae62780"
	I1227 19:58:18.361311  281586 cri.go:96] found id: "aeb61d6ca819d545ff012b4bb79b4b12a2d3e4fd7c017b8aff9b3f73e7e4fd72"
	I1227 19:58:18.361316  281586 cri.go:96] found id: "76e4419cf17d2bcb37cb37c985de6da6879900fefbf8d8753bdc6f8b601a7b70"
	I1227 19:58:18.361320  281586 cri.go:96] found id: "8628a6c29088613cf486c32e6965fba1763bf29fdb83553c63ccc780c26e53e1"
	I1227 19:58:18.361323  281586 cri.go:96] found id: "8b5869c0f3a2d08ab29fe49e15a8657d278dcd90725095d352af4569975b4092"
	I1227 19:58:18.361327  281586 cri.go:96] found id: "4bfa93abefaefe59e582f297d87729a6eae0fb1a9a0a16200706276a345bd9f5"
	I1227 19:58:18.361330  281586 cri.go:96] found id: "db248b3a0ad2c188ffc0bd6285c801a114ef2a163b57e80967eed7058d10b079"
	I1227 19:58:18.361354  281586 cri.go:96] found id: "14cd7f4eeeb99b20541874eb2f68a4348b74b68c609a377d4e92df28a82336e2"
	I1227 19:58:18.361363  281586 cri.go:96] found id: "4060cf97180d981d325d0d9e6eb59ef66cf733a447793c3c3c19354ffe8cc564"
	I1227 19:58:18.361371  281586 cri.go:96] found id: "63f13e1a463440ef808022bc00191127d600efb20f8a410beeafe0ff3eba5e18"
	I1227 19:58:18.361375  281586 cri.go:96] found id: "76dabe108a8cdf25e513799e72a1701938261cabbaf7677deb7cf44b74e6693e"
	I1227 19:58:18.361379  281586 cri.go:96] found id: "5069e3ca60ffbe2dae6fb5bf95131972cc927b0230086e1374f3aa33984f9a66"
	I1227 19:58:18.361382  281586 cri.go:96] found id: "59198603f9dcd704e4c9bf1e3690d726408d5fbe97ca91fbee22d027956132a4"
	I1227 19:58:18.361385  281586 cri.go:96] found id: "6b3af5ed669f8def1398487b65fab3dc84efc5016bc4e43413569ae9cf491fae"
	I1227 19:58:18.361388  281586 cri.go:96] found id: "78bf49f8848895542e9d07cd088de90af51710dcb99e3756d7a1ae5577d88b11"
	I1227 19:58:18.361395  281586 cri.go:96] found id: "affa3c0ed51244e760e065712febf0f9b147fb070147eea321d9eccfb748d170"
	I1227 19:58:18.361398  281586 cri.go:96] found id: "2d16e4494d6c0f9e5b1eff95ad99704e381224b2dd00e0b326a3c2f5fdbe920c"
	I1227 19:58:18.361402  281586 cri.go:96] found id: "acf0f77c3c7122dd9f3b2603143da9d067c290830c8d6e96ae769b64065a6f69"
	I1227 19:58:18.361411  281586 cri.go:96] found id: "527b6bec92865051022b476e87f4a56edd36b846695cf95de5df71efbc3328fd"
	I1227 19:58:18.361414  281586 cri.go:96] found id: "f6bd3ae635b96e69ae3cbe10aa5433f6dad9b05cc38a83cac219a431275bfa26"
	I1227 19:58:18.361434  281586 cri.go:96] found id: "665372886f2f5a56019a7acc4aaba64773a2800425add6614a10ed8d31727212"
	I1227 19:58:18.361468  281586 cri.go:96] found id: "46d7e3e9cbbf8984d7ddb16c6a136272c405ce68b4de5cfb0b73b424a67b97ba"
	I1227 19:58:18.361472  281586 cri.go:96] found id: "2339b22f5aa5ad86cbc68a8ee3d73f387563e2d32cb08c5b0ebd3fe231f755bf"
	I1227 19:58:18.361494  281586 cri.go:96] found id: ""
	I1227 19:58:18.361550  281586 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:58:18.376115  281586 out.go:203] 
	W1227 19:58:18.379133  281586 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:58:18.379160  281586 out.go:285] * 
	* 
	W1227 19:58:18.382119  281586 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:58:18.385194  281586 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-686526 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.53s)

TestAddons/parallel/RegistryCreds (0.5s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 5.298503ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-686526
addons_test.go:334: (dbg) Run:  kubectl --context addons-686526 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-686526 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-686526 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (265.361132ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 19:58:46.535255  283382 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:58:46.535971  283382 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:46.535984  283382 out.go:374] Setting ErrFile to fd 2...
	I1227 19:58:46.535989  283382 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:46.536261  283382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 19:58:46.536582  283382 mustload.go:66] Loading cluster: addons-686526
	I1227 19:58:46.536967  283382 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:46.536990  283382 addons.go:622] checking whether the cluster is paused
	I1227 19:58:46.537120  283382 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:46.537136  283382 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:58:46.537702  283382 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:58:46.554413  283382 ssh_runner.go:195] Run: systemctl --version
	I1227 19:58:46.554475  283382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:58:46.571129  283382 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:58:46.672252  283382 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:58:46.672338  283382 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:58:46.712528  283382 cri.go:96] found id: "f7d0de1b69961a288553903b4992c35c22e5e36f33340aca5549666ccae62780"
	I1227 19:58:46.712550  283382 cri.go:96] found id: "aeb61d6ca819d545ff012b4bb79b4b12a2d3e4fd7c017b8aff9b3f73e7e4fd72"
	I1227 19:58:46.712555  283382 cri.go:96] found id: "76e4419cf17d2bcb37cb37c985de6da6879900fefbf8d8753bdc6f8b601a7b70"
	I1227 19:58:46.712559  283382 cri.go:96] found id: "8628a6c29088613cf486c32e6965fba1763bf29fdb83553c63ccc780c26e53e1"
	I1227 19:58:46.712564  283382 cri.go:96] found id: "8b5869c0f3a2d08ab29fe49e15a8657d278dcd90725095d352af4569975b4092"
	I1227 19:58:46.712568  283382 cri.go:96] found id: "4bfa93abefaefe59e582f297d87729a6eae0fb1a9a0a16200706276a345bd9f5"
	I1227 19:58:46.712571  283382 cri.go:96] found id: "db248b3a0ad2c188ffc0bd6285c801a114ef2a163b57e80967eed7058d10b079"
	I1227 19:58:46.712574  283382 cri.go:96] found id: "14cd7f4eeeb99b20541874eb2f68a4348b74b68c609a377d4e92df28a82336e2"
	I1227 19:58:46.712577  283382 cri.go:96] found id: "4060cf97180d981d325d0d9e6eb59ef66cf733a447793c3c3c19354ffe8cc564"
	I1227 19:58:46.712583  283382 cri.go:96] found id: "63f13e1a463440ef808022bc00191127d600efb20f8a410beeafe0ff3eba5e18"
	I1227 19:58:46.712586  283382 cri.go:96] found id: "76dabe108a8cdf25e513799e72a1701938261cabbaf7677deb7cf44b74e6693e"
	I1227 19:58:46.712601  283382 cri.go:96] found id: "5069e3ca60ffbe2dae6fb5bf95131972cc927b0230086e1374f3aa33984f9a66"
	I1227 19:58:46.712611  283382 cri.go:96] found id: "59198603f9dcd704e4c9bf1e3690d726408d5fbe97ca91fbee22d027956132a4"
	I1227 19:58:46.712614  283382 cri.go:96] found id: "6b3af5ed669f8def1398487b65fab3dc84efc5016bc4e43413569ae9cf491fae"
	I1227 19:58:46.712617  283382 cri.go:96] found id: "78bf49f8848895542e9d07cd088de90af51710dcb99e3756d7a1ae5577d88b11"
	I1227 19:58:46.712622  283382 cri.go:96] found id: "affa3c0ed51244e760e065712febf0f9b147fb070147eea321d9eccfb748d170"
	I1227 19:58:46.712625  283382 cri.go:96] found id: "2d16e4494d6c0f9e5b1eff95ad99704e381224b2dd00e0b326a3c2f5fdbe920c"
	I1227 19:58:46.712635  283382 cri.go:96] found id: "acf0f77c3c7122dd9f3b2603143da9d067c290830c8d6e96ae769b64065a6f69"
	I1227 19:58:46.712642  283382 cri.go:96] found id: "527b6bec92865051022b476e87f4a56edd36b846695cf95de5df71efbc3328fd"
	I1227 19:58:46.712645  283382 cri.go:96] found id: "f6bd3ae635b96e69ae3cbe10aa5433f6dad9b05cc38a83cac219a431275bfa26"
	I1227 19:58:46.712650  283382 cri.go:96] found id: "665372886f2f5a56019a7acc4aaba64773a2800425add6614a10ed8d31727212"
	I1227 19:58:46.712653  283382 cri.go:96] found id: "46d7e3e9cbbf8984d7ddb16c6a136272c405ce68b4de5cfb0b73b424a67b97ba"
	I1227 19:58:46.712657  283382 cri.go:96] found id: "2339b22f5aa5ad86cbc68a8ee3d73f387563e2d32cb08c5b0ebd3fe231f755bf"
	I1227 19:58:46.712660  283382 cri.go:96] found id: ""
	I1227 19:58:46.712707  283382 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:58:46.727098  283382 out.go:203] 
	W1227 19:58:46.729906  283382 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:58:46.729928  283382 out.go:285] * 
	* 
	W1227 19:58:46.732850  283382 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:58:46.735763  283382 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-686526 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.50s)

TestAddons/parallel/Ingress (8.53s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-686526 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-686526 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-686526 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [14c92731-0261-4b07-8b6e-44704adb173a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [14c92731-0261-4b07-8b6e-44704adb173a] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 6.006821451s
I1227 19:58:44.405568  274336 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-686526 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-686526 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-686526 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-686526 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-686526 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (383.163842ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 19:58:45.747438  283254 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:58:45.748438  283254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:45.748476  283254 out.go:374] Setting ErrFile to fd 2...
	I1227 19:58:45.748498  283254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:45.748806  283254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 19:58:45.749164  283254 mustload.go:66] Loading cluster: addons-686526
	I1227 19:58:45.749666  283254 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:45.749713  283254 addons.go:622] checking whether the cluster is paused
	I1227 19:58:45.749860  283254 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:45.749892  283254 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:58:45.750487  283254 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:58:45.791950  283254 ssh_runner.go:195] Run: systemctl --version
	I1227 19:58:45.792005  283254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:58:45.815535  283254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:58:45.921160  283254 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:58:45.921248  283254 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:58:45.968742  283254 cri.go:96] found id: "f7d0de1b69961a288553903b4992c35c22e5e36f33340aca5549666ccae62780"
	I1227 19:58:45.968760  283254 cri.go:96] found id: "aeb61d6ca819d545ff012b4bb79b4b12a2d3e4fd7c017b8aff9b3f73e7e4fd72"
	I1227 19:58:45.968765  283254 cri.go:96] found id: "76e4419cf17d2bcb37cb37c985de6da6879900fefbf8d8753bdc6f8b601a7b70"
	I1227 19:58:45.968769  283254 cri.go:96] found id: "8628a6c29088613cf486c32e6965fba1763bf29fdb83553c63ccc780c26e53e1"
	I1227 19:58:45.968781  283254 cri.go:96] found id: "8b5869c0f3a2d08ab29fe49e15a8657d278dcd90725095d352af4569975b4092"
	I1227 19:58:45.968785  283254 cri.go:96] found id: "4bfa93abefaefe59e582f297d87729a6eae0fb1a9a0a16200706276a345bd9f5"
	I1227 19:58:45.968788  283254 cri.go:96] found id: "db248b3a0ad2c188ffc0bd6285c801a114ef2a163b57e80967eed7058d10b079"
	I1227 19:58:45.968791  283254 cri.go:96] found id: "14cd7f4eeeb99b20541874eb2f68a4348b74b68c609a377d4e92df28a82336e2"
	I1227 19:58:45.968794  283254 cri.go:96] found id: "4060cf97180d981d325d0d9e6eb59ef66cf733a447793c3c3c19354ffe8cc564"
	I1227 19:58:45.968800  283254 cri.go:96] found id: "63f13e1a463440ef808022bc00191127d600efb20f8a410beeafe0ff3eba5e18"
	I1227 19:58:45.968803  283254 cri.go:96] found id: "76dabe108a8cdf25e513799e72a1701938261cabbaf7677deb7cf44b74e6693e"
	I1227 19:58:45.968815  283254 cri.go:96] found id: "5069e3ca60ffbe2dae6fb5bf95131972cc927b0230086e1374f3aa33984f9a66"
	I1227 19:58:45.968818  283254 cri.go:96] found id: "59198603f9dcd704e4c9bf1e3690d726408d5fbe97ca91fbee22d027956132a4"
	I1227 19:58:45.968821  283254 cri.go:96] found id: "6b3af5ed669f8def1398487b65fab3dc84efc5016bc4e43413569ae9cf491fae"
	I1227 19:58:45.968824  283254 cri.go:96] found id: "78bf49f8848895542e9d07cd088de90af51710dcb99e3756d7a1ae5577d88b11"
	I1227 19:58:45.968829  283254 cri.go:96] found id: "affa3c0ed51244e760e065712febf0f9b147fb070147eea321d9eccfb748d170"
	I1227 19:58:45.968837  283254 cri.go:96] found id: "2d16e4494d6c0f9e5b1eff95ad99704e381224b2dd00e0b326a3c2f5fdbe920c"
	I1227 19:58:45.968841  283254 cri.go:96] found id: "acf0f77c3c7122dd9f3b2603143da9d067c290830c8d6e96ae769b64065a6f69"
	I1227 19:58:45.968844  283254 cri.go:96] found id: "527b6bec92865051022b476e87f4a56edd36b846695cf95de5df71efbc3328fd"
	I1227 19:58:45.968847  283254 cri.go:96] found id: "f6bd3ae635b96e69ae3cbe10aa5433f6dad9b05cc38a83cac219a431275bfa26"
	I1227 19:58:45.968851  283254 cri.go:96] found id: "665372886f2f5a56019a7acc4aaba64773a2800425add6614a10ed8d31727212"
	I1227 19:58:45.968854  283254 cri.go:96] found id: "46d7e3e9cbbf8984d7ddb16c6a136272c405ce68b4de5cfb0b73b424a67b97ba"
	I1227 19:58:45.968856  283254 cri.go:96] found id: "2339b22f5aa5ad86cbc68a8ee3d73f387563e2d32cb08c5b0ebd3fe231f755bf"
	I1227 19:58:45.968859  283254 cri.go:96] found id: ""
	I1227 19:58:45.968906  283254 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:58:45.987371  283254 out.go:203] 
	W1227 19:58:45.989856  283254 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:58:45.989890  283254 out.go:285] * 
	* 
	W1227 19:58:45.992798  283254 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:58:45.995927  283254 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-686526 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-686526 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-686526 addons disable ingress --alsologtostderr -v=1: exit status 11 (249.116846ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 19:58:46.048243  283325 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:58:46.048930  283325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:46.048949  283325 out.go:374] Setting ErrFile to fd 2...
	I1227 19:58:46.048955  283325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:46.049225  283325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 19:58:46.049625  283325 mustload.go:66] Loading cluster: addons-686526
	I1227 19:58:46.050018  283325 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:46.050042  283325 addons.go:622] checking whether the cluster is paused
	I1227 19:58:46.050151  283325 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:46.050167  283325 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:58:46.050707  283325 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:58:46.068512  283325 ssh_runner.go:195] Run: systemctl --version
	I1227 19:58:46.068590  283325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:58:46.092296  283325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:58:46.191784  283325 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:58:46.191894  283325 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:58:46.219730  283325 cri.go:96] found id: "f7d0de1b69961a288553903b4992c35c22e5e36f33340aca5549666ccae62780"
	I1227 19:58:46.219755  283325 cri.go:96] found id: "aeb61d6ca819d545ff012b4bb79b4b12a2d3e4fd7c017b8aff9b3f73e7e4fd72"
	I1227 19:58:46.219760  283325 cri.go:96] found id: "76e4419cf17d2bcb37cb37c985de6da6879900fefbf8d8753bdc6f8b601a7b70"
	I1227 19:58:46.219763  283325 cri.go:96] found id: "8628a6c29088613cf486c32e6965fba1763bf29fdb83553c63ccc780c26e53e1"
	I1227 19:58:46.219766  283325 cri.go:96] found id: "8b5869c0f3a2d08ab29fe49e15a8657d278dcd90725095d352af4569975b4092"
	I1227 19:58:46.219770  283325 cri.go:96] found id: "4bfa93abefaefe59e582f297d87729a6eae0fb1a9a0a16200706276a345bd9f5"
	I1227 19:58:46.219773  283325 cri.go:96] found id: "db248b3a0ad2c188ffc0bd6285c801a114ef2a163b57e80967eed7058d10b079"
	I1227 19:58:46.219776  283325 cri.go:96] found id: "14cd7f4eeeb99b20541874eb2f68a4348b74b68c609a377d4e92df28a82336e2"
	I1227 19:58:46.219779  283325 cri.go:96] found id: "4060cf97180d981d325d0d9e6eb59ef66cf733a447793c3c3c19354ffe8cc564"
	I1227 19:58:46.219785  283325 cri.go:96] found id: "63f13e1a463440ef808022bc00191127d600efb20f8a410beeafe0ff3eba5e18"
	I1227 19:58:46.219788  283325 cri.go:96] found id: "76dabe108a8cdf25e513799e72a1701938261cabbaf7677deb7cf44b74e6693e"
	I1227 19:58:46.219791  283325 cri.go:96] found id: "5069e3ca60ffbe2dae6fb5bf95131972cc927b0230086e1374f3aa33984f9a66"
	I1227 19:58:46.219794  283325 cri.go:96] found id: "59198603f9dcd704e4c9bf1e3690d726408d5fbe97ca91fbee22d027956132a4"
	I1227 19:58:46.219797  283325 cri.go:96] found id: "6b3af5ed669f8def1398487b65fab3dc84efc5016bc4e43413569ae9cf491fae"
	I1227 19:58:46.219800  283325 cri.go:96] found id: "78bf49f8848895542e9d07cd088de90af51710dcb99e3756d7a1ae5577d88b11"
	I1227 19:58:46.219809  283325 cri.go:96] found id: "affa3c0ed51244e760e065712febf0f9b147fb070147eea321d9eccfb748d170"
	I1227 19:58:46.219812  283325 cri.go:96] found id: "2d16e4494d6c0f9e5b1eff95ad99704e381224b2dd00e0b326a3c2f5fdbe920c"
	I1227 19:58:46.219817  283325 cri.go:96] found id: "acf0f77c3c7122dd9f3b2603143da9d067c290830c8d6e96ae769b64065a6f69"
	I1227 19:58:46.219821  283325 cri.go:96] found id: "527b6bec92865051022b476e87f4a56edd36b846695cf95de5df71efbc3328fd"
	I1227 19:58:46.219824  283325 cri.go:96] found id: "f6bd3ae635b96e69ae3cbe10aa5433f6dad9b05cc38a83cac219a431275bfa26"
	I1227 19:58:46.219828  283325 cri.go:96] found id: "665372886f2f5a56019a7acc4aaba64773a2800425add6614a10ed8d31727212"
	I1227 19:58:46.219831  283325 cri.go:96] found id: "46d7e3e9cbbf8984d7ddb16c6a136272c405ce68b4de5cfb0b73b424a67b97ba"
	I1227 19:58:46.219835  283325 cri.go:96] found id: "2339b22f5aa5ad86cbc68a8ee3d73f387563e2d32cb08c5b0ebd3fe231f755bf"
	I1227 19:58:46.219838  283325 cri.go:96] found id: ""
	I1227 19:58:46.219891  283325 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:58:46.234265  283325 out.go:203] 
	W1227 19:58:46.237232  283325 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:58:46.237255  283325 out.go:285] * 
	* 
	W1227 19:58:46.240165  283325 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:58:46.243271  283325 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-686526 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (8.53s)

TestAddons/parallel/InspektorGadget (5.26s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-mqphn" [0b9a80b7-86f9-467a-b53c-cc98aedca5fc] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003218072s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-686526 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-686526 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (250.99116ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 19:58:37.516576  282752 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:58:37.517604  282752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:37.517622  282752 out.go:374] Setting ErrFile to fd 2...
	I1227 19:58:37.517630  282752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:37.518045  282752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 19:58:37.518428  282752 mustload.go:66] Loading cluster: addons-686526
	I1227 19:58:37.519091  282752 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:37.519116  282752 addons.go:622] checking whether the cluster is paused
	I1227 19:58:37.519284  282752 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:37.519302  282752 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:58:37.520073  282752 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:58:37.538259  282752 ssh_runner.go:195] Run: systemctl --version
	I1227 19:58:37.538314  282752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:58:37.558922  282752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:58:37.660074  282752 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:58:37.660155  282752 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:58:37.693728  282752 cri.go:96] found id: "f7d0de1b69961a288553903b4992c35c22e5e36f33340aca5549666ccae62780"
	I1227 19:58:37.693751  282752 cri.go:96] found id: "aeb61d6ca819d545ff012b4bb79b4b12a2d3e4fd7c017b8aff9b3f73e7e4fd72"
	I1227 19:58:37.693755  282752 cri.go:96] found id: "76e4419cf17d2bcb37cb37c985de6da6879900fefbf8d8753bdc6f8b601a7b70"
	I1227 19:58:37.693759  282752 cri.go:96] found id: "8628a6c29088613cf486c32e6965fba1763bf29fdb83553c63ccc780c26e53e1"
	I1227 19:58:37.693762  282752 cri.go:96] found id: "8b5869c0f3a2d08ab29fe49e15a8657d278dcd90725095d352af4569975b4092"
	I1227 19:58:37.693765  282752 cri.go:96] found id: "4bfa93abefaefe59e582f297d87729a6eae0fb1a9a0a16200706276a345bd9f5"
	I1227 19:58:37.693768  282752 cri.go:96] found id: "db248b3a0ad2c188ffc0bd6285c801a114ef2a163b57e80967eed7058d10b079"
	I1227 19:58:37.693771  282752 cri.go:96] found id: "14cd7f4eeeb99b20541874eb2f68a4348b74b68c609a377d4e92df28a82336e2"
	I1227 19:58:37.693774  282752 cri.go:96] found id: "4060cf97180d981d325d0d9e6eb59ef66cf733a447793c3c3c19354ffe8cc564"
	I1227 19:58:37.693784  282752 cri.go:96] found id: "63f13e1a463440ef808022bc00191127d600efb20f8a410beeafe0ff3eba5e18"
	I1227 19:58:37.693788  282752 cri.go:96] found id: "76dabe108a8cdf25e513799e72a1701938261cabbaf7677deb7cf44b74e6693e"
	I1227 19:58:37.693791  282752 cri.go:96] found id: "5069e3ca60ffbe2dae6fb5bf95131972cc927b0230086e1374f3aa33984f9a66"
	I1227 19:58:37.693794  282752 cri.go:96] found id: "59198603f9dcd704e4c9bf1e3690d726408d5fbe97ca91fbee22d027956132a4"
	I1227 19:58:37.693797  282752 cri.go:96] found id: "6b3af5ed669f8def1398487b65fab3dc84efc5016bc4e43413569ae9cf491fae"
	I1227 19:58:37.693801  282752 cri.go:96] found id: "78bf49f8848895542e9d07cd088de90af51710dcb99e3756d7a1ae5577d88b11"
	I1227 19:58:37.693806  282752 cri.go:96] found id: "affa3c0ed51244e760e065712febf0f9b147fb070147eea321d9eccfb748d170"
	I1227 19:58:37.693809  282752 cri.go:96] found id: "2d16e4494d6c0f9e5b1eff95ad99704e381224b2dd00e0b326a3c2f5fdbe920c"
	I1227 19:58:37.693813  282752 cri.go:96] found id: "acf0f77c3c7122dd9f3b2603143da9d067c290830c8d6e96ae769b64065a6f69"
	I1227 19:58:37.693816  282752 cri.go:96] found id: "527b6bec92865051022b476e87f4a56edd36b846695cf95de5df71efbc3328fd"
	I1227 19:58:37.693819  282752 cri.go:96] found id: "f6bd3ae635b96e69ae3cbe10aa5433f6dad9b05cc38a83cac219a431275bfa26"
	I1227 19:58:37.693837  282752 cri.go:96] found id: "665372886f2f5a56019a7acc4aaba64773a2800425add6614a10ed8d31727212"
	I1227 19:58:37.693840  282752 cri.go:96] found id: "46d7e3e9cbbf8984d7ddb16c6a136272c405ce68b4de5cfb0b73b424a67b97ba"
	I1227 19:58:37.693843  282752 cri.go:96] found id: "2339b22f5aa5ad86cbc68a8ee3d73f387563e2d32cb08c5b0ebd3fe231f755bf"
	I1227 19:58:37.693846  282752 cri.go:96] found id: ""
	I1227 19:58:37.693894  282752 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:58:37.709832  282752 out.go:203] 
	W1227 19:58:37.712676  282752 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:58:37.712709  282752 out.go:285] * 
	* 
	W1227 19:58:37.715595  282752 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:58:37.718515  282752 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-686526 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.26s)

x
+
TestAddons/parallel/MetricsServer (5.36s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 5.613508ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-vd5cx" [666576fc-47dd-48ca-96e6-f651beb8216a] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003778123s
addons_test.go:465: (dbg) Run:  kubectl --context addons-686526 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-686526 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-686526 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (257.907698ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 19:58:32.257925  282689 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:58:32.258924  282689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:32.258940  282689 out.go:374] Setting ErrFile to fd 2...
	I1227 19:58:32.258946  282689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:32.259192  282689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 19:58:32.259489  282689 mustload.go:66] Loading cluster: addons-686526
	I1227 19:58:32.259855  282689 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:32.259877  282689 addons.go:622] checking whether the cluster is paused
	I1227 19:58:32.259992  282689 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:32.260002  282689 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:58:32.260503  282689 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:58:32.279823  282689 ssh_runner.go:195] Run: systemctl --version
	I1227 19:58:32.279884  282689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:58:32.300843  282689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:58:32.407402  282689 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:58:32.407483  282689 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:58:32.436732  282689 cri.go:96] found id: "f7d0de1b69961a288553903b4992c35c22e5e36f33340aca5549666ccae62780"
	I1227 19:58:32.436759  282689 cri.go:96] found id: "aeb61d6ca819d545ff012b4bb79b4b12a2d3e4fd7c017b8aff9b3f73e7e4fd72"
	I1227 19:58:32.436764  282689 cri.go:96] found id: "76e4419cf17d2bcb37cb37c985de6da6879900fefbf8d8753bdc6f8b601a7b70"
	I1227 19:58:32.436768  282689 cri.go:96] found id: "8628a6c29088613cf486c32e6965fba1763bf29fdb83553c63ccc780c26e53e1"
	I1227 19:58:32.436772  282689 cri.go:96] found id: "8b5869c0f3a2d08ab29fe49e15a8657d278dcd90725095d352af4569975b4092"
	I1227 19:58:32.436775  282689 cri.go:96] found id: "4bfa93abefaefe59e582f297d87729a6eae0fb1a9a0a16200706276a345bd9f5"
	I1227 19:58:32.436779  282689 cri.go:96] found id: "db248b3a0ad2c188ffc0bd6285c801a114ef2a163b57e80967eed7058d10b079"
	I1227 19:58:32.436782  282689 cri.go:96] found id: "14cd7f4eeeb99b20541874eb2f68a4348b74b68c609a377d4e92df28a82336e2"
	I1227 19:58:32.436785  282689 cri.go:96] found id: "4060cf97180d981d325d0d9e6eb59ef66cf733a447793c3c3c19354ffe8cc564"
	I1227 19:58:32.436791  282689 cri.go:96] found id: "63f13e1a463440ef808022bc00191127d600efb20f8a410beeafe0ff3eba5e18"
	I1227 19:58:32.436795  282689 cri.go:96] found id: "76dabe108a8cdf25e513799e72a1701938261cabbaf7677deb7cf44b74e6693e"
	I1227 19:58:32.436798  282689 cri.go:96] found id: "5069e3ca60ffbe2dae6fb5bf95131972cc927b0230086e1374f3aa33984f9a66"
	I1227 19:58:32.436802  282689 cri.go:96] found id: "59198603f9dcd704e4c9bf1e3690d726408d5fbe97ca91fbee22d027956132a4"
	I1227 19:58:32.436809  282689 cri.go:96] found id: "6b3af5ed669f8def1398487b65fab3dc84efc5016bc4e43413569ae9cf491fae"
	I1227 19:58:32.436813  282689 cri.go:96] found id: "78bf49f8848895542e9d07cd088de90af51710dcb99e3756d7a1ae5577d88b11"
	I1227 19:58:32.436818  282689 cri.go:96] found id: "affa3c0ed51244e760e065712febf0f9b147fb070147eea321d9eccfb748d170"
	I1227 19:58:32.436822  282689 cri.go:96] found id: "2d16e4494d6c0f9e5b1eff95ad99704e381224b2dd00e0b326a3c2f5fdbe920c"
	I1227 19:58:32.436826  282689 cri.go:96] found id: "acf0f77c3c7122dd9f3b2603143da9d067c290830c8d6e96ae769b64065a6f69"
	I1227 19:58:32.436830  282689 cri.go:96] found id: "527b6bec92865051022b476e87f4a56edd36b846695cf95de5df71efbc3328fd"
	I1227 19:58:32.436833  282689 cri.go:96] found id: "f6bd3ae635b96e69ae3cbe10aa5433f6dad9b05cc38a83cac219a431275bfa26"
	I1227 19:58:32.436838  282689 cri.go:96] found id: "665372886f2f5a56019a7acc4aaba64773a2800425add6614a10ed8d31727212"
	I1227 19:58:32.436842  282689 cri.go:96] found id: "46d7e3e9cbbf8984d7ddb16c6a136272c405ce68b4de5cfb0b73b424a67b97ba"
	I1227 19:58:32.436845  282689 cri.go:96] found id: "2339b22f5aa5ad86cbc68a8ee3d73f387563e2d32cb08c5b0ebd3fe231f755bf"
	I1227 19:58:32.436852  282689 cri.go:96] found id: ""
	I1227 19:58:32.436910  282689 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:58:32.453136  282689 out.go:203] 
	W1227 19:58:32.456112  282689 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:58:32.456137  282689 out.go:285] * 
	* 
	W1227 19:58:32.459145  282689 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:58:32.462204  282689 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-686526 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.36s)

x
+
TestAddons/parallel/CSI (32.98s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1227 19:58:23.914881  274336 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1227 19:58:23.918857  274336 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1227 19:58:23.918879  274336 kapi.go:107] duration metric: took 4.019326ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 4.029607ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-686526 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-686526 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-686526 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-686526 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-686526 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-686526 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-686526 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-686526 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [9b5047c3-3585-48bc-88cc-136133977818] Pending
helpers_test.go:353: "task-pv-pod" [9b5047c3-3585-48bc-88cc-136133977818] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [9b5047c3-3585-48bc-88cc-136133977818] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.00451684s
addons_test.go:574: (dbg) Run:  kubectl --context addons-686526 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-686526 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-686526 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-686526 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-686526 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-686526 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-686526 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-686526 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-686526 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-686526 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-686526 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-686526 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-686526 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-686526 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [58893489-a462-4bdf-8f49-761e30f81dde] Pending
helpers_test.go:353: "task-pv-pod-restore" [58893489-a462-4bdf-8f49-761e30f81dde] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [58893489-a462-4bdf-8f49-761e30f81dde] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003212399s
addons_test.go:616: (dbg) Run:  kubectl --context addons-686526 delete pod task-pv-pod-restore
addons_test.go:616: (dbg) Done: kubectl --context addons-686526 delete pod task-pv-pod-restore: (1.117127311s)
addons_test.go:620: (dbg) Run:  kubectl --context addons-686526 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-686526 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-686526 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-686526 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (246.987137ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 19:58:56.452192  283635 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:58:56.452979  283635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:56.452996  283635 out.go:374] Setting ErrFile to fd 2...
	I1227 19:58:56.453002  283635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:56.453305  283635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 19:58:56.453707  283635 mustload.go:66] Loading cluster: addons-686526
	I1227 19:58:56.454163  283635 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:56.454188  283635 addons.go:622] checking whether the cluster is paused
	I1227 19:58:56.454333  283635 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:56.454351  283635 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:58:56.454884  283635 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:58:56.471852  283635 ssh_runner.go:195] Run: systemctl --version
	I1227 19:58:56.471921  283635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:58:56.488580  283635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:58:56.588015  283635 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:58:56.588105  283635 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:58:56.615883  283635 cri.go:96] found id: "f7d0de1b69961a288553903b4992c35c22e5e36f33340aca5549666ccae62780"
	I1227 19:58:56.615902  283635 cri.go:96] found id: "aeb61d6ca819d545ff012b4bb79b4b12a2d3e4fd7c017b8aff9b3f73e7e4fd72"
	I1227 19:58:56.615907  283635 cri.go:96] found id: "76e4419cf17d2bcb37cb37c985de6da6879900fefbf8d8753bdc6f8b601a7b70"
	I1227 19:58:56.615910  283635 cri.go:96] found id: "8628a6c29088613cf486c32e6965fba1763bf29fdb83553c63ccc780c26e53e1"
	I1227 19:58:56.615913  283635 cri.go:96] found id: "8b5869c0f3a2d08ab29fe49e15a8657d278dcd90725095d352af4569975b4092"
	I1227 19:58:56.615917  283635 cri.go:96] found id: "4bfa93abefaefe59e582f297d87729a6eae0fb1a9a0a16200706276a345bd9f5"
	I1227 19:58:56.615920  283635 cri.go:96] found id: "db248b3a0ad2c188ffc0bd6285c801a114ef2a163b57e80967eed7058d10b079"
	I1227 19:58:56.615923  283635 cri.go:96] found id: "14cd7f4eeeb99b20541874eb2f68a4348b74b68c609a377d4e92df28a82336e2"
	I1227 19:58:56.615926  283635 cri.go:96] found id: "4060cf97180d981d325d0d9e6eb59ef66cf733a447793c3c3c19354ffe8cc564"
	I1227 19:58:56.615933  283635 cri.go:96] found id: "63f13e1a463440ef808022bc00191127d600efb20f8a410beeafe0ff3eba5e18"
	I1227 19:58:56.615936  283635 cri.go:96] found id: "76dabe108a8cdf25e513799e72a1701938261cabbaf7677deb7cf44b74e6693e"
	I1227 19:58:56.615939  283635 cri.go:96] found id: "5069e3ca60ffbe2dae6fb5bf95131972cc927b0230086e1374f3aa33984f9a66"
	I1227 19:58:56.615942  283635 cri.go:96] found id: "59198603f9dcd704e4c9bf1e3690d726408d5fbe97ca91fbee22d027956132a4"
	I1227 19:58:56.615945  283635 cri.go:96] found id: "6b3af5ed669f8def1398487b65fab3dc84efc5016bc4e43413569ae9cf491fae"
	I1227 19:58:56.615948  283635 cri.go:96] found id: "78bf49f8848895542e9d07cd088de90af51710dcb99e3756d7a1ae5577d88b11"
	I1227 19:58:56.615956  283635 cri.go:96] found id: "affa3c0ed51244e760e065712febf0f9b147fb070147eea321d9eccfb748d170"
	I1227 19:58:56.615959  283635 cri.go:96] found id: "2d16e4494d6c0f9e5b1eff95ad99704e381224b2dd00e0b326a3c2f5fdbe920c"
	I1227 19:58:56.615964  283635 cri.go:96] found id: "acf0f77c3c7122dd9f3b2603143da9d067c290830c8d6e96ae769b64065a6f69"
	I1227 19:58:56.615967  283635 cri.go:96] found id: "527b6bec92865051022b476e87f4a56edd36b846695cf95de5df71efbc3328fd"
	I1227 19:58:56.615970  283635 cri.go:96] found id: "f6bd3ae635b96e69ae3cbe10aa5433f6dad9b05cc38a83cac219a431275bfa26"
	I1227 19:58:56.615975  283635 cri.go:96] found id: "665372886f2f5a56019a7acc4aaba64773a2800425add6614a10ed8d31727212"
	I1227 19:58:56.615978  283635 cri.go:96] found id: "46d7e3e9cbbf8984d7ddb16c6a136272c405ce68b4de5cfb0b73b424a67b97ba"
	I1227 19:58:56.615981  283635 cri.go:96] found id: "2339b22f5aa5ad86cbc68a8ee3d73f387563e2d32cb08c5b0ebd3fe231f755bf"
	I1227 19:58:56.615983  283635 cri.go:96] found id: ""
	I1227 19:58:56.616032  283635 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:58:56.630224  283635 out.go:203] 
	W1227 19:58:56.633159  283635 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:58:56.633182  283635 out.go:285] * 
	* 
	W1227 19:58:56.636147  283635 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:58:56.639154  283635 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-686526 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-686526 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-686526 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (243.805885ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 19:58:56.692302  283678 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:58:56.693383  283678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:56.693398  283678 out.go:374] Setting ErrFile to fd 2...
	I1227 19:58:56.693404  283678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:56.693722  283678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 19:58:56.694062  283678 mustload.go:66] Loading cluster: addons-686526
	I1227 19:58:56.694472  283678 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:56.694508  283678 addons.go:622] checking whether the cluster is paused
	I1227 19:58:56.694645  283678 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:56.694663  283678 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:58:56.695233  283678 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:58:56.712420  283678 ssh_runner.go:195] Run: systemctl --version
	I1227 19:58:56.712481  283678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:58:56.728969  283678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:58:56.827936  283678 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:58:56.828029  283678 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:58:56.859322  283678 cri.go:96] found id: "f7d0de1b69961a288553903b4992c35c22e5e36f33340aca5549666ccae62780"
	I1227 19:58:56.859345  283678 cri.go:96] found id: "aeb61d6ca819d545ff012b4bb79b4b12a2d3e4fd7c017b8aff9b3f73e7e4fd72"
	I1227 19:58:56.859349  283678 cri.go:96] found id: "76e4419cf17d2bcb37cb37c985de6da6879900fefbf8d8753bdc6f8b601a7b70"
	I1227 19:58:56.859353  283678 cri.go:96] found id: "8628a6c29088613cf486c32e6965fba1763bf29fdb83553c63ccc780c26e53e1"
	I1227 19:58:56.859356  283678 cri.go:96] found id: "8b5869c0f3a2d08ab29fe49e15a8657d278dcd90725095d352af4569975b4092"
	I1227 19:58:56.859360  283678 cri.go:96] found id: "4bfa93abefaefe59e582f297d87729a6eae0fb1a9a0a16200706276a345bd9f5"
	I1227 19:58:56.859363  283678 cri.go:96] found id: "db248b3a0ad2c188ffc0bd6285c801a114ef2a163b57e80967eed7058d10b079"
	I1227 19:58:56.859365  283678 cri.go:96] found id: "14cd7f4eeeb99b20541874eb2f68a4348b74b68c609a377d4e92df28a82336e2"
	I1227 19:58:56.859368  283678 cri.go:96] found id: "4060cf97180d981d325d0d9e6eb59ef66cf733a447793c3c3c19354ffe8cc564"
	I1227 19:58:56.859374  283678 cri.go:96] found id: "63f13e1a463440ef808022bc00191127d600efb20f8a410beeafe0ff3eba5e18"
	I1227 19:58:56.859382  283678 cri.go:96] found id: "76dabe108a8cdf25e513799e72a1701938261cabbaf7677deb7cf44b74e6693e"
	I1227 19:58:56.859385  283678 cri.go:96] found id: "5069e3ca60ffbe2dae6fb5bf95131972cc927b0230086e1374f3aa33984f9a66"
	I1227 19:58:56.859388  283678 cri.go:96] found id: "59198603f9dcd704e4c9bf1e3690d726408d5fbe97ca91fbee22d027956132a4"
	I1227 19:58:56.859391  283678 cri.go:96] found id: "6b3af5ed669f8def1398487b65fab3dc84efc5016bc4e43413569ae9cf491fae"
	I1227 19:58:56.859396  283678 cri.go:96] found id: "78bf49f8848895542e9d07cd088de90af51710dcb99e3756d7a1ae5577d88b11"
	I1227 19:58:56.859406  283678 cri.go:96] found id: "affa3c0ed51244e760e065712febf0f9b147fb070147eea321d9eccfb748d170"
	I1227 19:58:56.859409  283678 cri.go:96] found id: "2d16e4494d6c0f9e5b1eff95ad99704e381224b2dd00e0b326a3c2f5fdbe920c"
	I1227 19:58:56.859412  283678 cri.go:96] found id: "acf0f77c3c7122dd9f3b2603143da9d067c290830c8d6e96ae769b64065a6f69"
	I1227 19:58:56.859415  283678 cri.go:96] found id: "527b6bec92865051022b476e87f4a56edd36b846695cf95de5df71efbc3328fd"
	I1227 19:58:56.859418  283678 cri.go:96] found id: "f6bd3ae635b96e69ae3cbe10aa5433f6dad9b05cc38a83cac219a431275bfa26"
	I1227 19:58:56.859422  283678 cri.go:96] found id: "665372886f2f5a56019a7acc4aaba64773a2800425add6614a10ed8d31727212"
	I1227 19:58:56.859432  283678 cri.go:96] found id: "46d7e3e9cbbf8984d7ddb16c6a136272c405ce68b4de5cfb0b73b424a67b97ba"
	I1227 19:58:56.859435  283678 cri.go:96] found id: "2339b22f5aa5ad86cbc68a8ee3d73f387563e2d32cb08c5b0ebd3fe231f755bf"
	I1227 19:58:56.859438  283678 cri.go:96] found id: ""
	I1227 19:58:56.859493  283678 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:58:56.874799  283678 out.go:203] 
	W1227 19:58:56.877670  283678 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:58:56.877694  283678 out.go:285] * 
	* 
	W1227 19:58:56.880647  283678 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:58:56.883555  283678 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-686526 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (32.98s)

x
+
TestAddons/parallel/Headlamp (3.38s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-686526 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-686526 --alsologtostderr -v=1: exit status 11 (397.824853ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1227 19:58:23.793750  281882 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:58:23.794500  281882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:23.794511  281882 out.go:374] Setting ErrFile to fd 2...
	I1227 19:58:23.794516  281882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:23.794782  281882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 19:58:23.795121  281882 mustload.go:66] Loading cluster: addons-686526
	I1227 19:58:23.795487  281882 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:23.795508  281882 addons.go:622] checking whether the cluster is paused
	I1227 19:58:23.795624  281882 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:23.795634  281882 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:58:23.796235  281882 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:58:23.815525  281882 ssh_runner.go:195] Run: systemctl --version
	I1227 19:58:23.815591  281882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:58:23.834977  281882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:58:24.030057  281882 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:58:24.030149  281882 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:58:24.084324  281882 cri.go:96] found id: "f7d0de1b69961a288553903b4992c35c22e5e36f33340aca5549666ccae62780"
	I1227 19:58:24.084353  281882 cri.go:96] found id: "aeb61d6ca819d545ff012b4bb79b4b12a2d3e4fd7c017b8aff9b3f73e7e4fd72"
	I1227 19:58:24.084359  281882 cri.go:96] found id: "76e4419cf17d2bcb37cb37c985de6da6879900fefbf8d8753bdc6f8b601a7b70"
	I1227 19:58:24.084363  281882 cri.go:96] found id: "8628a6c29088613cf486c32e6965fba1763bf29fdb83553c63ccc780c26e53e1"
	I1227 19:58:24.084366  281882 cri.go:96] found id: "8b5869c0f3a2d08ab29fe49e15a8657d278dcd90725095d352af4569975b4092"
	I1227 19:58:24.084370  281882 cri.go:96] found id: "4bfa93abefaefe59e582f297d87729a6eae0fb1a9a0a16200706276a345bd9f5"
	I1227 19:58:24.084373  281882 cri.go:96] found id: "db248b3a0ad2c188ffc0bd6285c801a114ef2a163b57e80967eed7058d10b079"
	I1227 19:58:24.084376  281882 cri.go:96] found id: "14cd7f4eeeb99b20541874eb2f68a4348b74b68c609a377d4e92df28a82336e2"
	I1227 19:58:24.084379  281882 cri.go:96] found id: "4060cf97180d981d325d0d9e6eb59ef66cf733a447793c3c3c19354ffe8cc564"
	I1227 19:58:24.084392  281882 cri.go:96] found id: "63f13e1a463440ef808022bc00191127d600efb20f8a410beeafe0ff3eba5e18"
	I1227 19:58:24.084396  281882 cri.go:96] found id: "76dabe108a8cdf25e513799e72a1701938261cabbaf7677deb7cf44b74e6693e"
	I1227 19:58:24.084399  281882 cri.go:96] found id: "5069e3ca60ffbe2dae6fb5bf95131972cc927b0230086e1374f3aa33984f9a66"
	I1227 19:58:24.084403  281882 cri.go:96] found id: "59198603f9dcd704e4c9bf1e3690d726408d5fbe97ca91fbee22d027956132a4"
	I1227 19:58:24.084406  281882 cri.go:96] found id: "6b3af5ed669f8def1398487b65fab3dc84efc5016bc4e43413569ae9cf491fae"
	I1227 19:58:24.084409  281882 cri.go:96] found id: "78bf49f8848895542e9d07cd088de90af51710dcb99e3756d7a1ae5577d88b11"
	I1227 19:58:24.084417  281882 cri.go:96] found id: "affa3c0ed51244e760e065712febf0f9b147fb070147eea321d9eccfb748d170"
	I1227 19:58:24.084420  281882 cri.go:96] found id: "2d16e4494d6c0f9e5b1eff95ad99704e381224b2dd00e0b326a3c2f5fdbe920c"
	I1227 19:58:24.084425  281882 cri.go:96] found id: "acf0f77c3c7122dd9f3b2603143da9d067c290830c8d6e96ae769b64065a6f69"
	I1227 19:58:24.084428  281882 cri.go:96] found id: "527b6bec92865051022b476e87f4a56edd36b846695cf95de5df71efbc3328fd"
	I1227 19:58:24.084431  281882 cri.go:96] found id: "f6bd3ae635b96e69ae3cbe10aa5433f6dad9b05cc38a83cac219a431275bfa26"
	I1227 19:58:24.084435  281882 cri.go:96] found id: "665372886f2f5a56019a7acc4aaba64773a2800425add6614a10ed8d31727212"
	I1227 19:58:24.084438  281882 cri.go:96] found id: "46d7e3e9cbbf8984d7ddb16c6a136272c405ce68b4de5cfb0b73b424a67b97ba"
	I1227 19:58:24.084441  281882 cri.go:96] found id: "2339b22f5aa5ad86cbc68a8ee3d73f387563e2d32cb08c5b0ebd3fe231f755bf"
	I1227 19:58:24.084444  281882 cri.go:96] found id: ""
	I1227 19:58:24.084494  281882 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:58:24.104233  281882 out.go:203] 
	W1227 19:58:24.107407  281882 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:58:24.107441  281882 out.go:285] * 
	* 
	W1227 19:58:24.115776  281882 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:58:24.120620  281882 out.go:203] 

** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-686526 --alsologtostderr -v=1": exit status 11
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-686526
helpers_test.go:244: (dbg) docker inspect addons-686526:

-- stdout --
	[
	    {
	        "Id": "9ea7fe87471b8694080849a8594d8c88b258a060dc9a4bf4fa5c68a0fbc5552e",
	        "Created": "2025-12-27T19:55:58.69087776Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 275497,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T19:55:58.754926489Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/9ea7fe87471b8694080849a8594d8c88b258a060dc9a4bf4fa5c68a0fbc5552e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9ea7fe87471b8694080849a8594d8c88b258a060dc9a4bf4fa5c68a0fbc5552e/hostname",
	        "HostsPath": "/var/lib/docker/containers/9ea7fe87471b8694080849a8594d8c88b258a060dc9a4bf4fa5c68a0fbc5552e/hosts",
	        "LogPath": "/var/lib/docker/containers/9ea7fe87471b8694080849a8594d8c88b258a060dc9a4bf4fa5c68a0fbc5552e/9ea7fe87471b8694080849a8594d8c88b258a060dc9a4bf4fa5c68a0fbc5552e-json.log",
	        "Name": "/addons-686526",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-686526:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-686526",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9ea7fe87471b8694080849a8594d8c88b258a060dc9a4bf4fa5c68a0fbc5552e",
	                "LowerDir": "/var/lib/docker/overlay2/97a6f82d950ceb6214f63013c3334226d9e404f95cc862b9e7a1071dbda194d4-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/97a6f82d950ceb6214f63013c3334226d9e404f95cc862b9e7a1071dbda194d4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/97a6f82d950ceb6214f63013c3334226d9e404f95cc862b9e7a1071dbda194d4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/97a6f82d950ceb6214f63013c3334226d9e404f95cc862b9e7a1071dbda194d4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-686526",
	                "Source": "/var/lib/docker/volumes/addons-686526/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-686526",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-686526",
	                "name.minikube.sigs.k8s.io": "addons-686526",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e67cbb3a2fe444befd5c3a03d8ffb77c74432a76827b88d8068d91e80567aa6b",
	            "SandboxKey": "/var/run/docker/netns/e67cbb3a2fe4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-686526": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:b1:ff:da:f8:9e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9efe49c88df093743d6396b155c99e8e560bc78b1faa50dd9a83ad4aea9b853e",
	                    "EndpointID": "54cb5832eb11c8bb11c2c3f36e35abefc36fce8df26ac978c9b2b01711d49b5c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-686526",
	                        "9ea7fe87471b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-686526 -n addons-686526
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-686526 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-686526 logs -n 25: (1.482936888s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-536076 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-536076   │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ delete  │ -p download-only-536076                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-536076   │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ start   │ -o=json --download-only -p download-only-540569 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-540569   │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ delete  │ -p download-only-540569                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-540569   │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ delete  │ -p download-only-536076                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-536076   │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ delete  │ -p download-only-540569                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-540569   │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ start   │ --download-only -p download-docker-559752 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-559752 │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │                     │
	│ delete  │ -p download-docker-559752                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-559752 │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ start   │ --download-only -p binary-mirror-845718 --alsologtostderr --binary-mirror http://127.0.0.1:41507 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-845718   │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │                     │
	│ delete  │ -p binary-mirror-845718                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-845718   │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ addons  │ enable dashboard -p addons-686526                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-686526          │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │                     │
	│ addons  │ disable dashboard -p addons-686526                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-686526          │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │                     │
	│ start   │ -p addons-686526 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-686526          │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:57 UTC │
	│ addons  │ addons-686526 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-686526          │ jenkins │ v1.37.0 │ 27 Dec 25 19:57 UTC │                     │
	│ addons  │ addons-686526 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-686526          │ jenkins │ v1.37.0 │ 27 Dec 25 19:58 UTC │                     │
	│ addons  │ addons-686526 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-686526          │ jenkins │ v1.37.0 │ 27 Dec 25 19:58 UTC │                     │
	│ addons  │ addons-686526 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-686526          │ jenkins │ v1.37.0 │ 27 Dec 25 19:58 UTC │                     │
	│ ip      │ addons-686526 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-686526          │ jenkins │ v1.37.0 │ 27 Dec 25 19:58 UTC │ 27 Dec 25 19:58 UTC │
	│ addons  │ addons-686526 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-686526          │ jenkins │ v1.37.0 │ 27 Dec 25 19:58 UTC │                     │
	│ ssh     │ addons-686526 ssh cat /opt/local-path-provisioner/pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-686526          │ jenkins │ v1.37.0 │ 27 Dec 25 19:58 UTC │ 27 Dec 25 19:58 UTC │
	│ addons  │ addons-686526 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-686526          │ jenkins │ v1.37.0 │ 27 Dec 25 19:58 UTC │                     │
	│ addons  │ addons-686526 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-686526          │ jenkins │ v1.37.0 │ 27 Dec 25 19:58 UTC │                     │
	│ addons  │ enable headlamp -p addons-686526 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-686526          │ jenkins │ v1.37.0 │ 27 Dec 25 19:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
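For reference, the command table above and the "==> Last Start <==" section below match the sections that minikube's own log dump prints for a profile. A minimal sketch to regenerate them, assuming the addons-686526 profile still exists on this host (the logs subcommand itself is standard minikube, not shown elsewhere in this report):

	# Re-dump the command audit table and the most recent start log for this profile
	out/minikube-linux-arm64 -p addons-686526 logs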
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 19:55:34
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 19:55:34.369146  275094 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:55:34.369361  275094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:55:34.369370  275094 out.go:374] Setting ErrFile to fd 2...
	I1227 19:55:34.369376  275094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:55:34.369698  275094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 19:55:34.370315  275094 out.go:368] Setting JSON to false
	I1227 19:55:34.371254  275094 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5887,"bootTime":1766859448,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 19:55:34.371324  275094 start.go:143] virtualization:  
	I1227 19:55:34.382344  275094 out.go:179] * [addons-686526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 19:55:34.393477  275094 notify.go:221] Checking for updates...
	I1227 19:55:34.393492  275094 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 19:55:34.405339  275094 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 19:55:34.419308  275094 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 19:55:34.429273  275094 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 19:55:34.442627  275094 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 19:55:34.453362  275094 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 19:55:34.475806  275094 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 19:55:34.495949  275094 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 19:55:34.496064  275094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 19:55:34.554687  275094 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-27 19:55:34.545296866 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 19:55:34.554788  275094 docker.go:319] overlay module found
	I1227 19:55:34.600449  275094 out.go:179] * Using the docker driver based on user configuration
	I1227 19:55:34.631160  275094 start.go:309] selected driver: docker
	I1227 19:55:34.631189  275094 start.go:928] validating driver "docker" against <nil>
	I1227 19:55:34.631205  275094 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 19:55:34.631911  275094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 19:55:34.692929  275094 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-27 19:55:34.683792986 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 19:55:34.693078  275094 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 19:55:34.693307  275094 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 19:55:34.709144  275094 out.go:179] * Using Docker driver with root privileges
	I1227 19:55:34.725165  275094 cni.go:84] Creating CNI manager for ""
	I1227 19:55:34.725254  275094 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 19:55:34.725270  275094 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 19:55:34.725348  275094 start.go:353] cluster config:
	{Name:addons-686526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-686526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s Rosetta:false}
	I1227 19:55:34.735803  275094 out.go:179] * Starting "addons-686526" primary control-plane node in "addons-686526" cluster
	I1227 19:55:34.751439  275094 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 19:55:34.765477  275094 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 19:55:34.792720  275094 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 19:55:34.792715  275094 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 19:55:34.792791  275094 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 19:55:34.792803  275094 cache.go:65] Caching tarball of preloaded images
	I1227 19:55:34.792883  275094 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 19:55:34.792892  275094 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 19:55:34.793269  275094 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/config.json ...
	I1227 19:55:34.793310  275094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/config.json: {Name:mke66933f8cacd447f398aef13cb68edbd7061ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:55:34.809547  275094 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1227 19:55:34.809695  275094 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local cache directory
	I1227 19:55:34.809717  275094 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local cache directory, skipping pull
	I1227 19:55:34.809722  275094 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in cache, skipping pull
	I1227 19:55:34.809729  275094 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a as a tarball
	I1227 19:55:34.809734  275094 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a from local cache
	I1227 19:55:52.828091  275094 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a from cached tarball
	I1227 19:55:52.828129  275094 cache.go:243] Successfully downloaded all kic artifacts
	I1227 19:55:52.828181  275094 start.go:360] acquireMachinesLock for addons-686526: {Name:mkf5bf8fc00cd6199199928be6b527695f82efc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 19:55:52.828306  275094 start.go:364] duration metric: took 105.401µs to acquireMachinesLock for "addons-686526"
	I1227 19:55:52.828336  275094 start.go:93] Provisioning new machine with config: &{Name:addons-686526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-686526 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 19:55:52.828407  275094 start.go:125] createHost starting for "" (driver="docker")
	I1227 19:55:52.833383  275094 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1227 19:55:52.833639  275094 start.go:159] libmachine.API.Create for "addons-686526" (driver="docker")
	I1227 19:55:52.833690  275094 client.go:173] LocalClient.Create starting
	I1227 19:55:52.833792  275094 main.go:144] libmachine: Creating CA: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem
	I1227 19:55:53.147011  275094 main.go:144] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem
	I1227 19:55:53.485040  275094 cli_runner.go:164] Run: docker network inspect addons-686526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 19:55:53.501512  275094 cli_runner.go:211] docker network inspect addons-686526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 19:55:53.501598  275094 network_create.go:284] running [docker network inspect addons-686526] to gather additional debugging logs...
	I1227 19:55:53.501620  275094 cli_runner.go:164] Run: docker network inspect addons-686526
	W1227 19:55:53.516300  275094 cli_runner.go:211] docker network inspect addons-686526 returned with exit code 1
	I1227 19:55:53.516333  275094 network_create.go:287] error running [docker network inspect addons-686526]: docker network inspect addons-686526: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-686526 not found
	I1227 19:55:53.516347  275094 network_create.go:289] output of [docker network inspect addons-686526]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-686526 not found
	
	** /stderr **
	I1227 19:55:53.516431  275094 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 19:55:53.531939  275094 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b4cf30}
	I1227 19:55:53.531981  275094 network_create.go:124] attempt to create docker network addons-686526 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1227 19:55:53.532034  275094 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-686526 addons-686526
	I1227 19:55:53.591501  275094 network_create.go:108] docker network addons-686526 192.168.49.0/24 created
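The subnet and gateway chosen above (192.168.49.0/24 with gateway 192.168.49.1) can be read back with the same Go template the log uses for network inspection; a quick check, assuming the addons-686526 network has not yet been deleted:

	# Print the subnet and gateway of the docker network minikube just created
	docker network inspect addons-686526 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'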
	I1227 19:55:53.591536  275094 kic.go:121] calculated static IP "192.168.49.2" for the "addons-686526" container
	I1227 19:55:53.591622  275094 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 19:55:53.605236  275094 cli_runner.go:164] Run: docker volume create addons-686526 --label name.minikube.sigs.k8s.io=addons-686526 --label created_by.minikube.sigs.k8s.io=true
	I1227 19:55:53.621819  275094 oci.go:103] Successfully created a docker volume addons-686526
	I1227 19:55:53.621908  275094 cli_runner.go:164] Run: docker run --rm --name addons-686526-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-686526 --entrypoint /usr/bin/test -v addons-686526:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 19:55:54.804286  275094 cli_runner.go:217] Completed: docker run --rm --name addons-686526-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-686526 --entrypoint /usr/bin/test -v addons-686526:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib: (1.182343108s)
	I1227 19:55:54.804330  275094 oci.go:107] Successfully prepared a docker volume addons-686526
	I1227 19:55:54.804373  275094 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 19:55:54.804388  275094 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 19:55:54.804451  275094 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-686526:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 19:55:58.625002  275094 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-686526:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.820496913s)
	I1227 19:55:58.625034  275094 kic.go:203] duration metric: took 3.820643487s to extract preloaded images to volume ...
	W1227 19:55:58.625159  275094 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 19:55:58.625321  275094 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 19:55:58.677037  275094 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-686526 --name addons-686526 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-686526 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-686526 --network addons-686526 --ip 192.168.49.2 --volume addons-686526:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 19:55:58.961105  275094 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Running}}
	I1227 19:55:58.981989  275094 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:55:59.009547  275094 cli_runner.go:164] Run: docker exec addons-686526 stat /var/lib/dpkg/alternatives/iptables
	I1227 19:55:59.070912  275094 oci.go:144] the created container "addons-686526" has a running status.
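The provisioning steps that follow dial SSH on a host port published by this container (127.0.0.1:33133 in this run). That port can be recovered later with the same inspect template the log uses; a sketch, assuming the container is still present:

	# Host port mapped to the node container's SSH port 22
	docker container inspect addons-686526 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'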
	I1227 19:55:59.070943  275094 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa...
	I1227 19:55:59.195878  275094 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 19:55:59.229746  275094 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:55:59.259735  275094 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 19:55:59.259760  275094 kic_runner.go:114] Args: [docker exec --privileged addons-686526 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 19:55:59.318708  275094 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:55:59.348280  275094 machine.go:94] provisionDockerMachine start ...
	I1227 19:55:59.348394  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:55:59.367840  275094 main.go:144] libmachine: Using SSH client type: native
	I1227 19:55:59.368160  275094 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1227 19:55:59.368176  275094 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 19:55:59.368731  275094 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 19:56:02.513075  275094 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-686526
	
	I1227 19:56:02.513102  275094 ubuntu.go:182] provisioning hostname "addons-686526"
	I1227 19:56:02.513166  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:56:02.530418  275094 main.go:144] libmachine: Using SSH client type: native
	I1227 19:56:02.530744  275094 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1227 19:56:02.530761  275094 main.go:144] libmachine: About to run SSH command:
	sudo hostname addons-686526 && echo "addons-686526" | sudo tee /etc/hostname
	I1227 19:56:02.678438  275094 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-686526
	
	I1227 19:56:02.678513  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:56:02.695525  275094 main.go:144] libmachine: Using SSH client type: native
	I1227 19:56:02.695839  275094 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1227 19:56:02.695856  275094 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-686526' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-686526/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-686526' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 19:56:02.833676  275094 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 19:56:02.833744  275094 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 19:56:02.833787  275094 ubuntu.go:190] setting up certificates
	I1227 19:56:02.833804  275094 provision.go:84] configureAuth start
	I1227 19:56:02.833863  275094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-686526
	I1227 19:56:02.850479  275094 provision.go:143] copyHostCerts
	I1227 19:56:02.850565  275094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 19:56:02.850693  275094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 19:56:02.850760  275094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 19:56:02.850818  275094 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.addons-686526 san=[127.0.0.1 192.168.49.2 addons-686526 localhost minikube]
	I1227 19:56:03.011453  275094 provision.go:177] copyRemoteCerts
	I1227 19:56:03.011519  275094 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 19:56:03.011559  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:56:03.028375  275094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:56:03.129006  275094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 19:56:03.146503  275094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 19:56:03.163260  275094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 19:56:03.180727  275094 provision.go:87] duration metric: took 346.893652ms to configureAuth
	I1227 19:56:03.180758  275094 ubuntu.go:206] setting minikube options for container-runtime
	I1227 19:56:03.180957  275094 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:56:03.181079  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:56:03.197497  275094 main.go:144] libmachine: Using SSH client type: native
	I1227 19:56:03.198004  275094 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1227 19:56:03.198035  275094 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 19:56:03.480979  275094 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 19:56:03.481001  275094 machine.go:97] duration metric: took 4.132698794s to provisionDockerMachine
	I1227 19:56:03.481012  275094 client.go:176] duration metric: took 10.647315884s to LocalClient.Create
	I1227 19:56:03.481031  275094 start.go:167] duration metric: took 10.647393405s to libmachine.API.Create "addons-686526"
	I1227 19:56:03.481038  275094 start.go:293] postStartSetup for "addons-686526" (driver="docker")
	I1227 19:56:03.481051  275094 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 19:56:03.481115  275094 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 19:56:03.481160  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:56:03.498799  275094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:56:03.601349  275094 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 19:56:03.604462  275094 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 19:56:03.604494  275094 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 19:56:03.604519  275094 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 19:56:03.604602  275094 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 19:56:03.604627  275094 start.go:296] duration metric: took 123.581107ms for postStartSetup
	I1227 19:56:03.604950  275094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-686526
	I1227 19:56:03.620981  275094 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/config.json ...
	I1227 19:56:03.621297  275094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 19:56:03.621340  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:56:03.637747  275094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:56:03.734249  275094 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 19:56:03.738875  275094 start.go:128] duration metric: took 10.910453004s to createHost
	I1227 19:56:03.738902  275094 start.go:83] releasing machines lock for "addons-686526", held for 10.910584702s
	I1227 19:56:03.738972  275094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-686526
	I1227 19:56:03.757717  275094 ssh_runner.go:195] Run: cat /version.json
	I1227 19:56:03.757773  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:56:03.757839  275094 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 19:56:03.757901  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:56:03.774896  275094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:56:03.790926  275094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:56:03.877201  275094 ssh_runner.go:195] Run: systemctl --version
	I1227 19:56:03.972281  275094 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 19:56:04.011710  275094 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 19:56:04.015945  275094 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 19:56:04.016016  275094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 19:56:04.045339  275094 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 19:56:04.045364  275094 start.go:496] detecting cgroup driver to use...
	I1227 19:56:04.045397  275094 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 19:56:04.045489  275094 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 19:56:04.063711  275094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 19:56:04.076282  275094 docker.go:218] disabling cri-docker service (if available) ...
	I1227 19:56:04.076365  275094 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 19:56:04.094273  275094 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 19:56:04.111944  275094 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 19:56:04.237783  275094 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 19:56:04.366781  275094 docker.go:234] disabling docker service ...
	I1227 19:56:04.366845  275094 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 19:56:04.387715  275094 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 19:56:04.400706  275094 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 19:56:04.514585  275094 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 19:56:04.636276  275094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 19:56:04.649052  275094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 19:56:04.662461  275094 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 19:56:04.662522  275094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 19:56:04.670821  275094 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 19:56:04.670893  275094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 19:56:04.679437  275094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 19:56:04.687668  275094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 19:56:04.695902  275094 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 19:56:04.703552  275094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 19:56:04.711771  275094 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 19:56:04.724746  275094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 19:56:04.733035  275094 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 19:56:04.740628  275094 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 19:56:04.747780  275094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 19:56:04.867893  275094 ssh_runner.go:195] Run: sudo systemctl restart crio
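Condensed, the cri-o adjustments applied above (pause image, cgroup driver, conmon cgroup) amount to roughly the following when run inside the node. This is a sketch of the same sed edits the log shows, not a supported configuration interface; the CONF variable is only shorthand for the drop-in path taken from the log:

	# Drop-in file edited by the steps above
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo systemctl daemon-reload && sudo systemctl restart crio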
	I1227 19:56:05.056907  275094 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 19:56:05.056993  275094 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 19:56:05.060725  275094 start.go:574] Will wait 60s for crictl version
	I1227 19:56:05.060791  275094 ssh_runner.go:195] Run: which crictl
	I1227 19:56:05.064164  275094 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 19:56:05.087585  275094 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 19:56:05.087685  275094 ssh_runner.go:195] Run: crio --version
	I1227 19:56:05.115841  275094 ssh_runner.go:195] Run: crio --version
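Because /etc/crictl.yaml was pointed at the cri-o socket a few steps earlier, the runtime version reported above can also be queried directly on the node; a one-line sketch:

	# Uses the runtime-endpoint written to /etc/crictl.yaml above
	# (equivalently: sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version)
	sudo crictl version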
	I1227 19:56:05.148091  275094 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 19:56:05.150969  275094 cli_runner.go:164] Run: docker network inspect addons-686526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 19:56:05.166786  275094 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 19:56:05.170572  275094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 19:56:05.180287  275094 kubeadm.go:884] updating cluster {Name:addons-686526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-686526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 19:56:05.180410  275094 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 19:56:05.180474  275094 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 19:56:05.215862  275094 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 19:56:05.215887  275094 crio.go:433] Images already preloaded, skipping extraction
	I1227 19:56:05.215942  275094 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 19:56:05.240700  275094 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 19:56:05.240728  275094 cache_images.go:86] Images are preloaded, skipping loading
	I1227 19:56:05.240737  275094 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I1227 19:56:05.240822  275094 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-686526 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:addons-686526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
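
In the kubelet drop-in above, the bare ExecStart= line is the usual systemd override pattern: it clears the ExecStart inherited from kubelet.service before the minikube-specific command line is set. Once the drop-in lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines down), the merged unit can be inspected with:

    sudo systemctl daemon-reload   # pick up the new drop-in
    systemctl cat kubelet          # shows kubelet.service plus the 10-kubeadm.conf override
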
	I1227 19:56:05.240906  275094 ssh_runner.go:195] Run: crio config
	I1227 19:56:05.297109  275094 cni.go:84] Creating CNI manager for ""
	I1227 19:56:05.297134  275094 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 19:56:05.297158  275094 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 19:56:05.297183  275094 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-686526 NodeName:addons-686526 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 19:56:05.297309  275094 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-686526"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
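
The rendered kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new (see the scp below) and copied to /var/tmp/minikube/kubeadm.yaml before init. If a run fails in this phase, the staged file can be sanity-checked with the bundled kubeadm itself; a sketch, assuming the `kubeadm config validate` subcommand is available in this v1.35.0 build:

    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
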
	I1227 19:56:05.297385  275094 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 19:56:05.304723  275094 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 19:56:05.304790  275094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 19:56:05.311879  275094 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 19:56:05.323897  275094 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 19:56:05.335723  275094 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1227 19:56:05.348068  275094 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1227 19:56:05.351415  275094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 19:56:05.360727  275094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 19:56:05.479348  275094 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 19:56:05.496095  275094 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526 for IP: 192.168.49.2
	I1227 19:56:05.496167  275094 certs.go:195] generating shared ca certs ...
	I1227 19:56:05.496198  275094 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:56:05.496375  275094 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 19:56:05.934132  275094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt ...
	I1227 19:56:05.934166  275094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt: {Name:mk4f57c0773191a191c04a56d019d315e648bda5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:56:05.934394  275094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key ...
	I1227 19:56:05.934410  275094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key: {Name:mk4134681bc4c6ae813fe3b763a66c30bcaf9f99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:56:05.934507  275094 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 19:56:06.136171  275094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt ...
	I1227 19:56:06.136204  275094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt: {Name:mkef6867437c788a21d059e1debb8a8a1ef30b26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:56:06.136373  275094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key ...
	I1227 19:56:06.136387  275094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key: {Name:mk5f95e026d864a14eb1b5a3166a466af97a9fc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:56:06.136468  275094 certs.go:257] generating profile certs ...
	I1227 19:56:06.136529  275094 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.key
	I1227 19:56:06.136546  275094 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt with IP's: []
	I1227 19:56:06.505188  275094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt ...
	I1227 19:56:06.505237  275094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: {Name:mk9713587eadb873cdd08c8529080941a22fd4c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:56:06.505424  275094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.key ...
	I1227 19:56:06.505438  275094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.key: {Name:mkb5184a4cc590364c978bf3311dc4773713f73e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:56:06.505552  275094 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/apiserver.key.1c7d1701
	I1227 19:56:06.505573  275094 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/apiserver.crt.1c7d1701 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1227 19:56:06.595212  275094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/apiserver.crt.1c7d1701 ...
	I1227 19:56:06.595244  275094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/apiserver.crt.1c7d1701: {Name:mk82ca231434e175562ab727f63fc13e1c9b042a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:56:06.595411  275094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/apiserver.key.1c7d1701 ...
	I1227 19:56:06.595429  275094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/apiserver.key.1c7d1701: {Name:mk9de522f873cdcd270dc01307ba5fcdecd844b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:56:06.595511  275094 certs.go:382] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/apiserver.crt.1c7d1701 -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/apiserver.crt
	I1227 19:56:06.595591  275094 certs.go:386] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/apiserver.key.1c7d1701 -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/apiserver.key
	I1227 19:56:06.595655  275094 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/proxy-client.key
	I1227 19:56:06.595676  275094 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/proxy-client.crt with IP's: []
	I1227 19:56:06.748074  275094 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/proxy-client.crt ...
	I1227 19:56:06.748105  275094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/proxy-client.crt: {Name:mkcee55b6cf647ecb038cecb9d60d3215b494875 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:56:06.748282  275094 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/proxy-client.key ...
	I1227 19:56:06.748345  275094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/proxy-client.key: {Name:mka5d6d677f3bda8b635959ed0812c007090e705 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:56:06.748567  275094 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 19:56:06.748618  275094 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 19:56:06.748649  275094 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 19:56:06.748686  275094 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 19:56:06.749210  275094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 19:56:06.766296  275094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 19:56:06.784208  275094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 19:56:06.802557  275094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 19:56:06.819395  275094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1227 19:56:06.836477  275094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 19:56:06.853743  275094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 19:56:06.870870  275094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 19:56:06.887729  275094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 19:56:06.904700  275094 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
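
The apiserver certificate generated above is signed for the service VIP, loopback, and the node IP (10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2). After the copies above land in /var/lib/minikube/certs, the SANs can be confirmed on the node with:

    openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
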
	I1227 19:56:06.917177  275094 ssh_runner.go:195] Run: openssl version
	I1227 19:56:06.923329  275094 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 19:56:06.930803  275094 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 19:56:06.938470  275094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 19:56:06.941980  275094 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 19:56:06.942104  275094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 19:56:06.984336  275094 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 19:56:06.992352  275094 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
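
The two symlinks above publish the minikube CA under both its readable name and its OpenSSL subject-hash name (b5213941.0 in this run), which is how the system TLS stack finds it in /etc/ssl/certs. The hash and the link can be verified with:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    readlink -f /etc/ssl/certs/b5213941.0                                     # resolves to minikubeCA.pem
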
	I1227 19:56:07.000323  275094 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 19:56:07.004692  275094 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 19:56:07.004741  275094 kubeadm.go:401] StartCluster: {Name:addons-686526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-686526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 19:56:07.004863  275094 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:56:07.004944  275094 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:56:07.032825  275094 cri.go:96] found id: ""
	I1227 19:56:07.032943  275094 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 19:56:07.040870  275094 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 19:56:07.048268  275094 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 19:56:07.048362  275094 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 19:56:07.055438  275094 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 19:56:07.055458  275094 kubeadm.go:158] found existing configuration files:
	
	I1227 19:56:07.055557  275094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 19:56:07.063101  275094 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 19:56:07.063192  275094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 19:56:07.070414  275094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 19:56:07.077569  275094 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 19:56:07.077635  275094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 19:56:07.084654  275094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 19:56:07.091982  275094 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 19:56:07.092068  275094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 19:56:07.098992  275094 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 19:56:07.106239  275094 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 19:56:07.106304  275094 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 19:56:07.114719  275094 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
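
The long --ignore-preflight-errors list lets kubeadm init proceed inside the Docker-driver container, where checks such as Swap, Mem, Port-10250 and the bridge-nf-call-iptables sysctl do not apply cleanly (the SystemVerification skip is noted a few lines above). The preflight phase can be replayed on its own to see what would otherwise fail; a sketch using the bundled binary and staged config from this run:

    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm init phase preflight \
      --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=SystemVerification
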
	I1227 19:56:07.151157  275094 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 19:56:07.151345  275094 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 19:56:07.223721  275094 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 19:56:07.223797  275094 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 19:56:07.223840  275094 kubeadm.go:319] OS: Linux
	I1227 19:56:07.223892  275094 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 19:56:07.223945  275094 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 19:56:07.223996  275094 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 19:56:07.224048  275094 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 19:56:07.224099  275094 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 19:56:07.224153  275094 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 19:56:07.224202  275094 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 19:56:07.224254  275094 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 19:56:07.224305  275094 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 19:56:07.287910  275094 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 19:56:07.288074  275094 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 19:56:07.288194  275094 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 19:56:07.297177  275094 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 19:56:07.304063  275094 out.go:252]   - Generating certificates and keys ...
	I1227 19:56:07.304219  275094 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 19:56:07.304326  275094 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 19:56:07.680677  275094 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 19:56:07.732894  275094 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 19:56:08.260997  275094 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 19:56:08.515184  275094 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 19:56:08.660892  275094 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 19:56:08.661142  275094 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-686526 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1227 19:56:08.954592  275094 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 19:56:08.954855  275094 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-686526 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1227 19:56:09.282048  275094 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 19:56:09.624542  275094 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 19:56:09.709154  275094 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 19:56:09.709389  275094 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 19:56:09.868012  275094 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 19:56:09.964440  275094 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 19:56:10.095486  275094 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 19:56:10.484522  275094 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 19:56:10.790063  275094 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 19:56:10.790795  275094 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 19:56:10.793533  275094 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 19:56:10.797011  275094 out.go:252]   - Booting up control plane ...
	I1227 19:56:10.797120  275094 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 19:56:10.797201  275094 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 19:56:10.797267  275094 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 19:56:10.812386  275094 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 19:56:10.812499  275094 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 19:56:10.822016  275094 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 19:56:10.822116  275094 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 19:56:10.822392  275094 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 19:56:10.963265  275094 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 19:56:10.963392  275094 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 19:56:11.967871  275094 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002195936s
	I1227 19:56:11.968889  275094 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 19:56:11.968979  275094 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1227 19:56:11.969068  275094 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 19:56:11.969147  275094 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 19:56:13.485578  275094 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.516154059s
	I1227 19:56:15.225269  275094 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.256281529s
	I1227 19:56:16.971680  275094 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.002521193s
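
The three control-plane-check URLs above are plain HTTPS endpoints and can be probed by hand if a start ever stalls at this stage (addresses from this run; -k skips certificate verification):

    curl -sk https://192.168.49.2:8443/livez      # kube-apiserver
    curl -sk https://127.0.0.1:10257/healthz      # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez        # kube-scheduler
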
	I1227 19:56:17.006188  275094 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 19:56:17.020765  275094 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 19:56:17.036850  275094 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 19:56:17.037086  275094 kubeadm.go:319] [mark-control-plane] Marking the node addons-686526 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 19:56:17.053271  275094 kubeadm.go:319] [bootstrap-token] Using token: kclaob.drxzsis0ojc2a14v
	I1227 19:56:17.056412  275094 out.go:252]   - Configuring RBAC rules ...
	I1227 19:56:17.056552  275094 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 19:56:17.065788  275094 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 19:56:17.075246  275094 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 19:56:17.083177  275094 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 19:56:17.087264  275094 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 19:56:17.091306  275094 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 19:56:17.377987  275094 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 19:56:17.805030  275094 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 19:56:18.379762  275094 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 19:56:18.380731  275094 kubeadm.go:319] 
	I1227 19:56:18.380813  275094 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 19:56:18.380826  275094 kubeadm.go:319] 
	I1227 19:56:18.380904  275094 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 19:56:18.380912  275094 kubeadm.go:319] 
	I1227 19:56:18.380938  275094 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 19:56:18.381009  275094 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 19:56:18.381069  275094 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 19:56:18.381077  275094 kubeadm.go:319] 
	I1227 19:56:18.381130  275094 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 19:56:18.381139  275094 kubeadm.go:319] 
	I1227 19:56:18.381187  275094 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 19:56:18.381195  275094 kubeadm.go:319] 
	I1227 19:56:18.381246  275094 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 19:56:18.381324  275094 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 19:56:18.381395  275094 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 19:56:18.381403  275094 kubeadm.go:319] 
	I1227 19:56:18.381500  275094 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 19:56:18.381583  275094 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 19:56:18.381590  275094 kubeadm.go:319] 
	I1227 19:56:18.381674  275094 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token kclaob.drxzsis0ojc2a14v \
	I1227 19:56:18.381780  275094 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ff29328d1e0d612c7979c16c69d6042f5f31e931d111cc12c8320ed4e4ab5152 \
	I1227 19:56:18.381897  275094 kubeadm.go:319] 	--control-plane 
	I1227 19:56:18.381909  275094 kubeadm.go:319] 
	I1227 19:56:18.382002  275094 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 19:56:18.382013  275094 kubeadm.go:319] 
	I1227 19:56:18.382095  275094 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token kclaob.drxzsis0ojc2a14v \
	I1227 19:56:18.382204  275094 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ff29328d1e0d612c7979c16c69d6042f5f31e931d111cc12c8320ed4e4ab5152 
	I1227 19:56:18.385802  275094 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 19:56:18.386215  275094 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 19:56:18.386322  275094 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
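
The cgroups v1 deprecation warning reflects the host kernel used here (5.15.0-1084-aws on the legacy hierarchy, with failCgroupV1: false set in the kubelet config above). Which hierarchy a node is on can be checked with:

    stat -fc %T /sys/fs/cgroup/   # cgroup2fs = v2, tmpfs = v1 (as in this run)
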
	I1227 19:56:18.386342  275094 cni.go:84] Creating CNI manager for ""
	I1227 19:56:18.386349  275094 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 19:56:18.389511  275094 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1227 19:56:18.392378  275094 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 19:56:18.396481  275094 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 19:56:18.396502  275094 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 19:56:18.409239  275094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 19:56:18.674148  275094 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 19:56:18.674269  275094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 19:56:18.674363  275094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-686526 minikube.k8s.io/updated_at=2025_12_27T19_56_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562 minikube.k8s.io/name=addons-686526 minikube.k8s.io/primary=true
	I1227 19:56:18.810808  275094 ops.go:34] apiserver oom_adj: -16
	I1227 19:56:18.810895  275094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 19:56:19.311578  275094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 19:56:19.811987  275094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 19:56:20.311508  275094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 19:56:20.811692  275094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 19:56:21.311855  275094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 19:56:21.811675  275094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 19:56:22.311298  275094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 19:56:22.811690  275094 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 19:56:22.917054  275094 kubeadm.go:1114] duration metric: took 4.242826754s to wait for elevateKubeSystemPrivileges
	I1227 19:56:22.917086  275094 kubeadm.go:403] duration metric: took 15.912347535s to StartCluster
	I1227 19:56:22.917106  275094 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:56:22.917229  275094 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 19:56:22.917646  275094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:56:22.917842  275094 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 19:56:22.917992  275094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 19:56:22.918242  275094 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:56:22.918284  275094 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1227 19:56:22.918403  275094 addons.go:70] Setting yakd=true in profile "addons-686526"
	I1227 19:56:22.918422  275094 addons.go:239] Setting addon yakd=true in "addons-686526"
	I1227 19:56:22.918445  275094 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:56:22.918998  275094 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:56:22.919301  275094 addons.go:70] Setting inspektor-gadget=true in profile "addons-686526"
	I1227 19:56:22.919323  275094 addons.go:239] Setting addon inspektor-gadget=true in "addons-686526"
	I1227 19:56:22.919345  275094 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:56:22.919612  275094 addons.go:70] Setting metrics-server=true in profile "addons-686526"
	I1227 19:56:22.919633  275094 addons.go:239] Setting addon metrics-server=true in "addons-686526"
	I1227 19:56:22.919665  275094 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:56:22.919745  275094 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:56:22.920110  275094 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:56:22.922356  275094 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-686526"
	I1227 19:56:22.922429  275094 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-686526"
	I1227 19:56:22.922537  275094 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:56:22.923914  275094 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:56:22.924205  275094 addons.go:70] Setting registry=true in profile "addons-686526"
	I1227 19:56:22.924231  275094 addons.go:239] Setting addon registry=true in "addons-686526"
	I1227 19:56:22.924259  275094 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:56:22.924691  275094 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:56:22.930358  275094 out.go:179] * Verifying Kubernetes components...
	I1227 19:56:22.933596  275094 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-686526"
	I1227 19:56:22.933695  275094 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-686526"
	I1227 19:56:22.933783  275094 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:56:22.934372  275094 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:56:22.934804  275094 addons.go:70] Setting cloud-spanner=true in profile "addons-686526"
	I1227 19:56:22.934833  275094 addons.go:239] Setting addon cloud-spanner=true in "addons-686526"
	I1227 19:56:22.934858  275094 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:56:22.935277  275094 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:56:22.941652  275094 addons.go:70] Setting registry-creds=true in profile "addons-686526"
	I1227 19:56:22.941686  275094 addons.go:239] Setting addon registry-creds=true in "addons-686526"
	I1227 19:56:22.941723  275094 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:56:22.942180  275094 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:56:22.944173  275094 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-686526"
	I1227 19:56:22.944236  275094 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-686526"
	I1227 19:56:22.944264  275094 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:56:22.944713  275094 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:56:22.949721  275094 addons.go:70] Setting storage-provisioner=true in profile "addons-686526"
	I1227 19:56:22.949813  275094 addons.go:239] Setting addon storage-provisioner=true in "addons-686526"
	I1227 19:56:22.949881  275094 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:56:22.950584  275094 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:56:22.964804  275094 addons.go:70] Setting default-storageclass=true in profile "addons-686526"
	I1227 19:56:22.964850  275094 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-686526"
	I1227 19:56:22.965173  275094 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:56:22.983542  275094 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-686526"
	I1227 19:56:22.983621  275094 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-686526"
	I1227 19:56:22.984018  275094 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:56:22.985590  275094 addons.go:70] Setting gcp-auth=true in profile "addons-686526"
	I1227 19:56:22.985619  275094 mustload.go:66] Loading cluster: addons-686526
	I1227 19:56:22.985793  275094 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:56:22.986018  275094 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:56:23.003972  275094 addons.go:70] Setting ingress=true in profile "addons-686526"
	I1227 19:56:23.004007  275094 addons.go:239] Setting addon ingress=true in "addons-686526"
	I1227 19:56:23.004054  275094 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:56:23.004586  275094 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:56:23.004824  275094 addons.go:70] Setting volcano=true in profile "addons-686526"
	I1227 19:56:23.004882  275094 addons.go:239] Setting addon volcano=true in "addons-686526"
	I1227 19:56:23.004923  275094 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:56:23.006836  275094 addons.go:70] Setting ingress-dns=true in profile "addons-686526"
	I1227 19:56:23.006917  275094 addons.go:239] Setting addon ingress-dns=true in "addons-686526"
	I1227 19:56:23.006991  275094 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:56:23.013948  275094 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:56:23.017593  275094 addons.go:70] Setting volumesnapshots=true in profile "addons-686526"
	I1227 19:56:23.017689  275094 addons.go:239] Setting addon volumesnapshots=true in "addons-686526"
	I1227 19:56:23.017796  275094 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:56:23.034111  275094 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:56:23.119361  275094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 19:56:23.120611  275094 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:56:23.135112  275094 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1227 19:56:23.138981  275094 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1227 19:56:23.139005  275094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1227 19:56:23.139070  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:56:23.160611  275094 out.go:179]   - Using image docker.io/registry:3.0.0
	I1227 19:56:23.160735  275094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1227 19:56:23.162844  275094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
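
The pipeline above rewrites the coredns ConfigMap in place: it inserts a hosts{} stanza mapping host.minikube.internal to 192.168.49.1 ahead of the forward plugin, adds a log directive, and feeds the result back through kubectl replace. The rewritten Corefile can be inspected afterwards with:

    sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
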
	I1227 19:56:23.167509  275094 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1227 19:56:23.177799  275094 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.46
	I1227 19:56:23.182655  275094 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1227 19:56:23.182809  275094 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1227 19:56:23.182823  275094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1227 19:56:23.182893  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:56:23.190237  275094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1227 19:56:23.193563  275094 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1227 19:56:23.179713  275094 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1227 19:56:23.193676  275094 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1227 19:56:23.204051  275094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1227 19:56:23.204903  275094 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1227 19:56:23.218143  275094 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1227 19:56:23.218233  275094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1227 19:56:23.218329  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:56:23.221918  275094 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1227 19:56:23.221998  275094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1227 19:56:23.222112  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:56:23.229567  275094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1227 19:56:23.229742  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:56:23.241318  275094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1227 19:56:23.243469  275094 addons.go:239] Setting addon default-storageclass=true in "addons-686526"
	I1227 19:56:23.243507  275094 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:56:23.248977  275094 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:56:23.259882  275094 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 19:56:23.261716  275094 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1227 19:56:23.261822  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:56:23.265172  275094 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1227 19:56:23.270308  275094 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-686526"
	I1227 19:56:23.270351  275094 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:56:23.270807  275094 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:56:23.295591  275094 out.go:179]   - Using image ghcr.io/manusa/yakd:0.0.6
	I1227 19:56:23.298634  275094 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1227 19:56:23.298672  275094 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1227 19:56:23.298751  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:56:23.311833  275094 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 19:56:23.311854  275094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 19:56:23.311924  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:56:23.335085  275094 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:56:23.340246  275094 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1227 19:56:23.340268  275094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1227 19:56:23.340331  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:56:23.374958  275094 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1227 19:56:23.377356  275094 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1227 19:56:23.383325  275094 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1227 19:56:23.386273  275094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1227 19:56:23.393619  275094 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1227 19:56:23.393909  275094 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	W1227 19:56:23.394196  275094 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1227 19:56:23.394520  275094 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1227 19:56:23.434003  275094 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1227 19:56:23.434041  275094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1227 19:56:23.434163  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:56:23.438176  275094 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1227 19:56:23.438252  275094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1227 19:56:23.438356  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:56:23.461527  275094 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1227 19:56:23.469763  275094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:56:23.469890  275094 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1227 19:56:23.469922  275094 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1227 19:56:23.470035  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:56:23.470492  275094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:56:23.505890  275094 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1227 19:56:23.506203  275094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:56:23.507524  275094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:56:23.508327  275094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:56:23.517254  275094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:56:23.520393  275094 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 19:56:23.520446  275094 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 19:56:23.520552  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:56:23.520980  275094 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1227 19:56:23.521037  275094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16257 bytes)
	I1227 19:56:23.521116  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:56:23.526373  275094 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1227 19:56:23.529277  275094 out.go:179]   - Using image docker.io/busybox:stable
	I1227 19:56:23.532445  275094 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1227 19:56:23.532473  275094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1227 19:56:23.532535  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:56:23.545763  275094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:56:23.576301  275094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:56:23.580827  275094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:56:23.593614  275094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:56:23.636096  275094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:56:23.651670  275094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:56:23.653393  275094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	W1227 19:56:23.665791  275094 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1227 19:56:23.665849  275094 retry.go:84] will retry after 100ms: ssh: handshake failed: EOF
	I1227 19:56:23.669376  275094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:56:23.679508  275094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:56:23.715045  275094 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 19:56:24.183404  275094 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1227 19:56:24.183431  275094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1227 19:56:24.234260  275094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 19:56:24.251722  275094 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1227 19:56:24.251747  275094 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1227 19:56:24.321227  275094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1227 19:56:24.357690  275094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1227 19:56:24.359704  275094 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1227 19:56:24.359737  275094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1227 19:56:24.383250  275094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1227 19:56:24.401386  275094 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1227 19:56:24.401412  275094 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1227 19:56:24.420806  275094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1227 19:56:24.429815  275094 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1227 19:56:24.429842  275094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1227 19:56:24.440247  275094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1227 19:56:24.440709  275094 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1227 19:56:24.440727  275094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1227 19:56:24.460267  275094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1227 19:56:24.491142  275094 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1227 19:56:24.491172  275094 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1227 19:56:24.495082  275094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 19:56:24.506313  275094 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1227 19:56:24.506345  275094 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1227 19:56:24.534388  275094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1227 19:56:24.539199  275094 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1227 19:56:24.539225  275094 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1227 19:56:24.542374  275094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1227 19:56:24.665527  275094 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1227 19:56:24.665556  275094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1227 19:56:24.759129  275094 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1227 19:56:24.759152  275094 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1227 19:56:24.793602  275094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1227 19:56:24.835091  275094 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1227 19:56:24.835116  275094 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1227 19:56:24.863102  275094 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.700226212s)
	I1227 19:56:24.863132  275094 start.go:987] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
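The completed command above rewrites the coredns ConfigMap so cluster workloads can resolve host.minikube.internal to the host gateway. Going by the sed expressions in the log, the edited Corefile gains a log directive before errors and a hosts block ahead of the forward plugin, roughly:

	        log
	        errors
	        # ... unchanged plugins ...
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf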
	I1227 19:56:24.863194  275094 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.148118078s)
	I1227 19:56:24.863963  275094 node_ready.go:35] waiting up to 6m0s for node "addons-686526" to be "Ready" ...
	I1227 19:56:24.979330  275094 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1227 19:56:24.979354  275094 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1227 19:56:25.110917  275094 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1227 19:56:25.110944  275094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1227 19:56:25.267582  275094 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1227 19:56:25.267608  275094 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1227 19:56:25.272742  275094 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1227 19:56:25.272766  275094 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1227 19:56:25.295176  275094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1227 19:56:25.369180  275094 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-686526" context rescaled to 1 replicas
	I1227 19:56:25.378335  275094 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1227 19:56:25.378368  275094 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1227 19:56:25.506870  275094 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1227 19:56:25.506894  275094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1227 19:56:25.557373  275094 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1227 19:56:25.557405  275094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2013 bytes)
	I1227 19:56:25.777649  275094 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1227 19:56:25.777674  275094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1227 19:56:25.834308  275094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1227 19:56:26.031220  275094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.796917311s)
	I1227 19:56:26.047938  275094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1227 19:56:26.128408  275094 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1227 19:56:26.128436  275094 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1227 19:56:26.543760  275094 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1227 19:56:26.543838  275094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	W1227 19:56:26.877545  275094 node_ready.go:57] node "addons-686526" has "Ready":"False" status (will retry)
	I1227 19:56:26.946126  275094 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1227 19:56:26.946201  275094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1227 19:56:27.161310  275094 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1227 19:56:27.161386  275094 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1227 19:56:27.458890  275094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1227 19:56:28.883639  275094 node_ready.go:57] node "addons-686526" has "Ready":"False" status (will retry)
	I1227 19:56:29.155347  275094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.797620776s)
	I1227 19:56:29.155631  275094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.772346581s)
	I1227 19:56:29.155706  275094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.834452419s)
	I1227 19:56:30.159487  275094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.738647823s)
	I1227 19:56:30.159525  275094 addons.go:495] Verifying addon ingress=true in "addons-686526"
	I1227 19:56:30.159579  275094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.719304976s)
	I1227 19:56:30.159608  275094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.699315297s)
	I1227 19:56:30.159643  275094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.664525056s)
	I1227 19:56:30.159665  275094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.625260023s)
	I1227 19:56:30.159700  275094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.617305323s)
	I1227 19:56:30.159722  275094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.366094574s)
	I1227 19:56:30.159734  275094 addons.go:495] Verifying addon registry=true in "addons-686526"
	I1227 19:56:30.160082  275094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.864871419s)
	I1227 19:56:30.160108  275094 addons.go:495] Verifying addon metrics-server=true in "addons-686526"
	I1227 19:56:30.160156  275094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.325799356s)
	W1227 19:56:30.160185  275094 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1227 19:56:30.160209  275094 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
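The failure above is an ordering problem rather than a broken manifest: the VolumeSnapshotClass object is submitted in the same kubectl apply that creates its CRD, and the API server has not finished registering the new kind, hence "no matches for kind VolumeSnapshotClass". minikube handles this by retrying (and, further down, re-applying with --force). A hedged manual equivalent is to install the CRD first, wait for it to be established, then apply the class:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml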
	I1227 19:56:30.160280  275094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.112309324s)
	I1227 19:56:30.162993  275094 out.go:179] * Verifying registry addon...
	I1227 19:56:30.163123  275094 out.go:179] * Verifying ingress addon...
	I1227 19:56:30.165904  275094 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1227 19:56:30.165994  275094 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-686526 service yakd-dashboard -n yakd-dashboard
	
	I1227 19:56:30.169626  275094 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1227 19:56:30.175013  275094 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1227 19:56:30.175040  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:30.179347  275094 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1227 19:56:30.179374  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:30.434285  275094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1227 19:56:30.459654  275094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.000670555s)
	I1227 19:56:30.459689  275094 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-686526"
	I1227 19:56:30.461988  275094 out.go:179] * Verifying csi-hostpath-driver addon...
	I1227 19:56:30.465656  275094 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1227 19:56:30.480299  275094 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1227 19:56:30.480325  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
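The kapi.go lines here poll each addon's pods by label selector until they leave Pending. An approximate kubectl equivalent of the same readiness check, using the selectors from this log, would be:

	kubectl -n kube-system wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=6m
	kubectl -n ingress-nginx wait --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx --timeout=6m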
	I1227 19:56:30.669925  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:30.672937  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:30.969742  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:30.999163  275094 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1227 19:56:30.999244  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:56:31.016831  275094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:56:31.127144  275094 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1227 19:56:31.140153  275094 addons.go:239] Setting addon gcp-auth=true in "addons-686526"
	I1227 19:56:31.140205  275094 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:56:31.140691  275094 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:56:31.157967  275094 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1227 19:56:31.158023  275094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:56:31.170508  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:31.175383  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:31.181100  275094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	W1227 19:56:31.366723  275094 node_ready.go:57] node "addons-686526" has "Ready":"False" status (will retry)
	I1227 19:56:31.468421  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:31.669422  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:31.672933  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:31.969923  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:32.168665  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:32.173334  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:32.468871  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:32.669488  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:32.673174  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:32.969346  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:33.129591  275094 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.971589653s)
	I1227 19:56:33.129600  275094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.695262055s)
	I1227 19:56:33.132872  275094 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1227 19:56:33.135846  275094 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1227 19:56:33.138634  275094 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1227 19:56:33.138663  275094 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1227 19:56:33.154327  275094 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1227 19:56:33.154356  275094 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1227 19:56:33.167276  275094 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1227 19:56:33.167297  275094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1227 19:56:33.169330  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:33.173397  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:33.181422  275094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	W1227 19:56:33.367549  275094 node_ready.go:57] node "addons-686526" has "Ready":"False" status (will retry)
	I1227 19:56:33.468578  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:33.688385  275094 addons.go:495] Verifying addon gcp-auth=true in "addons-686526"
	I1227 19:56:33.690277  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:33.690972  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:33.691467  275094 out.go:179] * Verifying gcp-auth addon...
	I1227 19:56:33.694997  275094 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1227 19:56:33.698728  275094 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1227 19:56:33.698749  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:33.969232  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:34.168951  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:34.172362  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:34.197809  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:34.468346  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:34.669218  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:34.672589  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:34.698089  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:34.969078  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:35.169098  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:35.172617  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:35.198704  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:35.469331  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:35.669100  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:35.672395  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:35.698658  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1227 19:56:35.867676  275094 node_ready.go:57] node "addons-686526" has "Ready":"False" status (will retry)
	I1227 19:56:35.969596  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:36.169653  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:36.172215  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:36.198126  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:36.469875  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:36.669106  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:36.672706  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:36.698588  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:36.969534  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:37.169994  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:37.179018  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:37.272204  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:37.468570  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:37.669406  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:37.672791  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:37.698587  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1227 19:56:37.868222  275094 node_ready.go:57] node "addons-686526" has "Ready":"False" status (will retry)
	I1227 19:56:37.969112  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:38.191003  275094 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1227 19:56:38.191074  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:38.191464  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:38.217107  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:38.390991  275094 node_ready.go:49] node "addons-686526" is "Ready"
	I1227 19:56:38.391073  275094 node_ready.go:38] duration metric: took 13.527084852s for node "addons-686526" to be "Ready" ...
	I1227 19:56:38.391102  275094 api_server.go:52] waiting for apiserver process to appear ...
	I1227 19:56:38.391201  275094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 19:56:38.409820  275094 api_server.go:72] duration metric: took 15.491940794s to wait for apiserver process to appear ...
	I1227 19:56:38.409846  275094 api_server.go:88] waiting for apiserver healthz status ...
	I1227 19:56:38.409866  275094 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 19:56:38.450538  275094 api_server.go:325] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1227 19:56:38.457321  275094 api_server.go:141] control plane version: v1.35.0
	I1227 19:56:38.457353  275094 api_server.go:131] duration metric: took 47.499462ms to wait for apiserver health ...
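The healthz wait above queries the apiserver endpoint directly and proceeds once it returns 200/ok. The same probe can be reproduced by hand (a sketch, run from inside the node so the kubeconfig path and endpoint from the log are reachable):

	kubectl --kubeconfig /var/lib/minikube/kubeconfig get --raw /healthz
	curl -k https://192.168.49.2:8443/healthz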
	I1227 19:56:38.457363  275094 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 19:56:38.539226  275094 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1227 19:56:38.539253  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:38.540954  275094 system_pods.go:59] 19 kube-system pods found
	I1227 19:56:38.540996  275094 system_pods.go:61] "coredns-7d764666f9-xqfvw" [d6221fbb-e7fa-491c-bad1-e0dc6b2cc120] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 19:56:38.541023  275094 system_pods.go:61] "csi-hostpath-attacher-0" [aefd3300-8472-4c92-b31e-0a0ec8f6cae9] Pending
	I1227 19:56:38.541040  275094 system_pods.go:61] "csi-hostpath-resizer-0" [9f562371-0e1f-4eff-bfc4-d04f4dff6cec] Pending
	I1227 19:56:38.541045  275094 system_pods.go:61] "csi-hostpathplugin-zr686" [d16a504d-c062-441a-8106-d5bc41df8a94] Pending
	I1227 19:56:38.541049  275094 system_pods.go:61] "etcd-addons-686526" [5c726dfb-85d9-471e-8a72-d51a6e8d94d1] Running
	I1227 19:56:38.541056  275094 system_pods.go:61] "kindnet-5dhlc" [afa57a87-cd95-42f4-886f-364edb2babb7] Running
	I1227 19:56:38.541067  275094 system_pods.go:61] "kube-apiserver-addons-686526" [f305c82e-4b1a-4628-9db3-d90f5d700050] Running
	I1227 19:56:38.541071  275094 system_pods.go:61] "kube-controller-manager-addons-686526" [1256c06a-117d-4e03-8856-2b55d85fcd06] Running
	I1227 19:56:38.541078  275094 system_pods.go:61] "kube-ingress-dns-minikube" [b39567e5-56ae-4649-beed-ea5697c90e51] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1227 19:56:38.541103  275094 system_pods.go:61] "kube-proxy-7n5r2" [6b1d22df-ba68-4883-bbbd-58bb93d32a67] Running
	I1227 19:56:38.541130  275094 system_pods.go:61] "kube-scheduler-addons-686526" [09893500-101f-4c6b-80b4-c14b207d8526] Running
	I1227 19:56:38.541137  275094 system_pods.go:61] "metrics-server-5778bb4788-vd5cx" [666576fc-47dd-48ca-96e6-f651beb8216a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 19:56:38.541147  275094 system_pods.go:61] "nvidia-device-plugin-daemonset-5d8kz" [34367370-48b0-4086-b7e4-1c176da80c81] Pending
	I1227 19:56:38.541153  275094 system_pods.go:61] "registry-788cd7d5bc-6s25q" [f0b1919e-0cc3-4360-be8f-4ce4ccfcb1b4] Pending
	I1227 19:56:38.541158  275094 system_pods.go:61] "registry-creds-567fb78d95-djlhq" [9004b351-6e24-4a99-8837-7568ccfbce17] Pending
	I1227 19:56:38.541166  275094 system_pods.go:61] "registry-proxy-x4f62" [8d1ee49f-0706-4da1-bb4a-b08c535e2797] Pending
	I1227 19:56:38.541175  275094 system_pods.go:61] "snapshot-controller-6588d87457-gkh4d" [c0dcf761-9384-4953-96b0-89090f68eaca] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 19:56:38.541196  275094 system_pods.go:61] "snapshot-controller-6588d87457-scj48" [c987d7a8-93b6-4683-af32-41e952470c16] Pending
	I1227 19:56:38.541222  275094 system_pods.go:61] "storage-provisioner" [6fb160ef-4e7a-437d-a613-00bd53175fc8] Pending
	I1227 19:56:38.541229  275094 system_pods.go:74] duration metric: took 83.859797ms to wait for pod list to return data ...
	I1227 19:56:38.541243  275094 default_sa.go:34] waiting for default service account to be created ...
	I1227 19:56:38.545251  275094 default_sa.go:45] found service account: "default"
	I1227 19:56:38.545277  275094 default_sa.go:55] duration metric: took 4.02702ms for default service account to be created ...
	I1227 19:56:38.545288  275094 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 19:56:38.582005  275094 system_pods.go:86] 19 kube-system pods found
	I1227 19:56:38.582047  275094 system_pods.go:89] "coredns-7d764666f9-xqfvw" [d6221fbb-e7fa-491c-bad1-e0dc6b2cc120] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 19:56:38.582055  275094 system_pods.go:89] "csi-hostpath-attacher-0" [aefd3300-8472-4c92-b31e-0a0ec8f6cae9] Pending
	I1227 19:56:38.582060  275094 system_pods.go:89] "csi-hostpath-resizer-0" [9f562371-0e1f-4eff-bfc4-d04f4dff6cec] Pending
	I1227 19:56:38.582093  275094 system_pods.go:89] "csi-hostpathplugin-zr686" [d16a504d-c062-441a-8106-d5bc41df8a94] Pending
	I1227 19:56:38.582102  275094 system_pods.go:89] "etcd-addons-686526" [5c726dfb-85d9-471e-8a72-d51a6e8d94d1] Running
	I1227 19:56:38.582107  275094 system_pods.go:89] "kindnet-5dhlc" [afa57a87-cd95-42f4-886f-364edb2babb7] Running
	I1227 19:56:38.582112  275094 system_pods.go:89] "kube-apiserver-addons-686526" [f305c82e-4b1a-4628-9db3-d90f5d700050] Running
	I1227 19:56:38.582125  275094 system_pods.go:89] "kube-controller-manager-addons-686526" [1256c06a-117d-4e03-8856-2b55d85fcd06] Running
	I1227 19:56:38.582134  275094 system_pods.go:89] "kube-ingress-dns-minikube" [b39567e5-56ae-4649-beed-ea5697c90e51] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1227 19:56:38.582145  275094 system_pods.go:89] "kube-proxy-7n5r2" [6b1d22df-ba68-4883-bbbd-58bb93d32a67] Running
	I1227 19:56:38.582164  275094 system_pods.go:89] "kube-scheduler-addons-686526" [09893500-101f-4c6b-80b4-c14b207d8526] Running
	I1227 19:56:38.582186  275094 system_pods.go:89] "metrics-server-5778bb4788-vd5cx" [666576fc-47dd-48ca-96e6-f651beb8216a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 19:56:38.582197  275094 system_pods.go:89] "nvidia-device-plugin-daemonset-5d8kz" [34367370-48b0-4086-b7e4-1c176da80c81] Pending
	I1227 19:56:38.582202  275094 system_pods.go:89] "registry-788cd7d5bc-6s25q" [f0b1919e-0cc3-4360-be8f-4ce4ccfcb1b4] Pending
	I1227 19:56:38.582206  275094 system_pods.go:89] "registry-creds-567fb78d95-djlhq" [9004b351-6e24-4a99-8837-7568ccfbce17] Pending
	I1227 19:56:38.582210  275094 system_pods.go:89] "registry-proxy-x4f62" [8d1ee49f-0706-4da1-bb4a-b08c535e2797] Pending
	I1227 19:56:38.582225  275094 system_pods.go:89] "snapshot-controller-6588d87457-gkh4d" [c0dcf761-9384-4953-96b0-89090f68eaca] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 19:56:38.582231  275094 system_pods.go:89] "snapshot-controller-6588d87457-scj48" [c987d7a8-93b6-4683-af32-41e952470c16] Pending
	I1227 19:56:38.582236  275094 system_pods.go:89] "storage-provisioner" [6fb160ef-4e7a-437d-a613-00bd53175fc8] Pending
	I1227 19:56:38.582267  275094 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1227 19:56:38.673012  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:38.674971  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:38.709950  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:38.828367  275094 system_pods.go:86] 19 kube-system pods found
	I1227 19:56:38.828455  275094 system_pods.go:89] "coredns-7d764666f9-xqfvw" [d6221fbb-e7fa-491c-bad1-e0dc6b2cc120] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 19:56:38.828481  275094 system_pods.go:89] "csi-hostpath-attacher-0" [aefd3300-8472-4c92-b31e-0a0ec8f6cae9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1227 19:56:38.828520  275094 system_pods.go:89] "csi-hostpath-resizer-0" [9f562371-0e1f-4eff-bfc4-d04f4dff6cec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1227 19:56:38.828548  275094 system_pods.go:89] "csi-hostpathplugin-zr686" [d16a504d-c062-441a-8106-d5bc41df8a94] Pending
	I1227 19:56:38.828572  275094 system_pods.go:89] "etcd-addons-686526" [5c726dfb-85d9-471e-8a72-d51a6e8d94d1] Running
	I1227 19:56:38.828597  275094 system_pods.go:89] "kindnet-5dhlc" [afa57a87-cd95-42f4-886f-364edb2babb7] Running
	I1227 19:56:38.828630  275094 system_pods.go:89] "kube-apiserver-addons-686526" [f305c82e-4b1a-4628-9db3-d90f5d700050] Running
	I1227 19:56:38.828653  275094 system_pods.go:89] "kube-controller-manager-addons-686526" [1256c06a-117d-4e03-8856-2b55d85fcd06] Running
	I1227 19:56:38.828692  275094 system_pods.go:89] "kube-ingress-dns-minikube" [b39567e5-56ae-4649-beed-ea5697c90e51] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1227 19:56:38.828715  275094 system_pods.go:89] "kube-proxy-7n5r2" [6b1d22df-ba68-4883-bbbd-58bb93d32a67] Running
	I1227 19:56:38.828737  275094 system_pods.go:89] "kube-scheduler-addons-686526" [09893500-101f-4c6b-80b4-c14b207d8526] Running
	I1227 19:56:38.828762  275094 system_pods.go:89] "metrics-server-5778bb4788-vd5cx" [666576fc-47dd-48ca-96e6-f651beb8216a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 19:56:38.828795  275094 system_pods.go:89] "nvidia-device-plugin-daemonset-5d8kz" [34367370-48b0-4086-b7e4-1c176da80c81] Pending
	I1227 19:56:38.828822  275094 system_pods.go:89] "registry-788cd7d5bc-6s25q" [f0b1919e-0cc3-4360-be8f-4ce4ccfcb1b4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1227 19:56:38.828847  275094 system_pods.go:89] "registry-creds-567fb78d95-djlhq" [9004b351-6e24-4a99-8837-7568ccfbce17] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1227 19:56:38.828870  275094 system_pods.go:89] "registry-proxy-x4f62" [8d1ee49f-0706-4da1-bb4a-b08c535e2797] Pending
	I1227 19:56:38.828904  275094 system_pods.go:89] "snapshot-controller-6588d87457-gkh4d" [c0dcf761-9384-4953-96b0-89090f68eaca] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 19:56:38.828933  275094 system_pods.go:89] "snapshot-controller-6588d87457-scj48" [c987d7a8-93b6-4683-af32-41e952470c16] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 19:56:38.828956  275094 system_pods.go:89] "storage-provisioner" [6fb160ef-4e7a-437d-a613-00bd53175fc8] Pending
	I1227 19:56:38.970135  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:39.166011  275094 system_pods.go:86] 19 kube-system pods found
	I1227 19:56:39.166287  275094 system_pods.go:89] "coredns-7d764666f9-xqfvw" [d6221fbb-e7fa-491c-bad1-e0dc6b2cc120] Running
	I1227 19:56:39.166325  275094 system_pods.go:89] "csi-hostpath-attacher-0" [aefd3300-8472-4c92-b31e-0a0ec8f6cae9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1227 19:56:39.166348  275094 system_pods.go:89] "csi-hostpath-resizer-0" [9f562371-0e1f-4eff-bfc4-d04f4dff6cec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1227 19:56:39.166376  275094 system_pods.go:89] "csi-hostpathplugin-zr686" [d16a504d-c062-441a-8106-d5bc41df8a94] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1227 19:56:39.166408  275094 system_pods.go:89] "etcd-addons-686526" [5c726dfb-85d9-471e-8a72-d51a6e8d94d1] Running
	I1227 19:56:39.166436  275094 system_pods.go:89] "kindnet-5dhlc" [afa57a87-cd95-42f4-886f-364edb2babb7] Running
	I1227 19:56:39.166460  275094 system_pods.go:89] "kube-apiserver-addons-686526" [f305c82e-4b1a-4628-9db3-d90f5d700050] Running
	I1227 19:56:39.166480  275094 system_pods.go:89] "kube-controller-manager-addons-686526" [1256c06a-117d-4e03-8856-2b55d85fcd06] Running
	I1227 19:56:39.166516  275094 system_pods.go:89] "kube-ingress-dns-minikube" [b39567e5-56ae-4649-beed-ea5697c90e51] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1227 19:56:39.166542  275094 system_pods.go:89] "kube-proxy-7n5r2" [6b1d22df-ba68-4883-bbbd-58bb93d32a67] Running
	I1227 19:56:39.166567  275094 system_pods.go:89] "kube-scheduler-addons-686526" [09893500-101f-4c6b-80b4-c14b207d8526] Running
	I1227 19:56:39.166594  275094 system_pods.go:89] "metrics-server-5778bb4788-vd5cx" [666576fc-47dd-48ca-96e6-f651beb8216a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 19:56:39.166628  275094 system_pods.go:89] "nvidia-device-plugin-daemonset-5d8kz" [34367370-48b0-4086-b7e4-1c176da80c81] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1227 19:56:39.166659  275094 system_pods.go:89] "registry-788cd7d5bc-6s25q" [f0b1919e-0cc3-4360-be8f-4ce4ccfcb1b4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1227 19:56:39.166690  275094 system_pods.go:89] "registry-creds-567fb78d95-djlhq" [9004b351-6e24-4a99-8837-7568ccfbce17] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1227 19:56:39.166716  275094 system_pods.go:89] "registry-proxy-x4f62" [8d1ee49f-0706-4da1-bb4a-b08c535e2797] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1227 19:56:39.166747  275094 system_pods.go:89] "snapshot-controller-6588d87457-gkh4d" [c0dcf761-9384-4953-96b0-89090f68eaca] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 19:56:39.166776  275094 system_pods.go:89] "snapshot-controller-6588d87457-scj48" [c987d7a8-93b6-4683-af32-41e952470c16] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 19:56:39.166800  275094 system_pods.go:89] "storage-provisioner" [6fb160ef-4e7a-437d-a613-00bd53175fc8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 19:56:39.166845  275094 system_pods.go:126] duration metric: took 621.540556ms to wait for k8s-apps to be running ...
	I1227 19:56:39.166872  275094 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 19:56:39.166957  275094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 19:56:39.188919  275094 system_svc.go:56] duration metric: took 22.038198ms WaitForService to wait for kubelet
	I1227 19:56:39.188993  275094 kubeadm.go:587] duration metric: took 16.271118237s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 19:56:39.189029  275094 node_conditions.go:102] verifying NodePressure condition ...
	I1227 19:56:39.192061  275094 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 19:56:39.192134  275094 node_conditions.go:123] node cpu capacity is 2
	I1227 19:56:39.192163  275094 node_conditions.go:105] duration metric: took 3.11277ms to run NodePressure ...
	I1227 19:56:39.192189  275094 start.go:242] waiting for startup goroutines ...
	I1227 19:56:39.258197  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:39.258369  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:39.258964  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:39.469877  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:39.668559  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:39.672208  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:39.698508  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:39.969046  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:40.170410  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:40.180692  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:40.198667  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:40.468952  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:40.669221  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:40.672691  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:40.698504  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:40.969942  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:41.174122  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:41.174349  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:41.199527  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:41.469027  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:41.668869  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:41.672838  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:41.698921  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:41.969175  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:42.172010  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:42.174605  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:42.199890  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:42.469751  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:42.668835  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:42.673248  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:42.698809  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:42.969472  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:43.169328  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:43.173194  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:43.198317  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:43.469631  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:43.670588  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:43.673219  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:43.698135  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:43.969208  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:44.169112  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:44.172465  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:44.198437  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:44.469321  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:44.669115  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:44.672763  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:44.698213  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:44.969551  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:45.170868  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:45.175880  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:45.202859  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:45.469049  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:45.669708  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:45.672758  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:45.698835  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:45.969229  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:46.169357  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:46.173372  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:46.198269  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:46.470333  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:46.669428  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:46.673232  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:46.698099  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:46.969391  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:47.169474  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:47.177436  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:47.198539  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:47.470011  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:47.669228  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:47.673221  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:47.698401  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:47.970776  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:48.171492  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:48.173031  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:48.198703  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:48.473612  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:48.669045  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:48.672750  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:48.698724  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:48.969763  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:49.168659  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:49.172266  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:49.198539  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:49.469841  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:49.669062  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:49.672736  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:49.698553  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:49.970384  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:50.170546  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:50.178813  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:50.204310  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:50.470414  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:50.675065  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:50.676017  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:50.698229  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:50.972347  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:51.174722  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:51.175223  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:51.199117  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:51.470204  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:51.669509  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:51.673696  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:51.702886  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:51.974449  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:52.169242  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:52.173314  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:52.198288  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:52.469293  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:52.669362  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:52.673169  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:52.698290  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:52.969364  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:53.169246  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:53.172949  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:53.197664  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:53.469314  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:53.669325  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:53.672901  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:53.697953  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:53.968992  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:54.171196  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:54.173028  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:54.198094  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:54.469883  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:54.668877  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:54.672774  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:54.701764  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:54.969668  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:55.169210  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:55.173714  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:55.199405  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:55.469252  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:55.669896  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:55.671996  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:55.697633  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:55.969037  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:56.168841  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:56.172570  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:56.198637  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:56.470397  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:56.669254  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:56.673065  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:56.698935  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:56.969690  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:57.170020  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:57.173353  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:57.199898  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:57.470796  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:57.669778  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:57.672151  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:57.698904  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:57.970423  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:58.169150  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:58.173117  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:58.198065  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:58.469969  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:58.669290  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:58.673121  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:58.698124  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:58.969865  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:59.170238  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:59.172750  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:59.198850  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:59.469537  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:56:59.669846  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:56:59.672545  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:56:59.698338  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:56:59.969511  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:00.199597  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:00.199890  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:00.207584  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:00.469083  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:00.669353  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:00.673180  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:00.698052  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:00.969789  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:01.168981  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:01.173333  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:01.198769  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:01.469770  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:01.669501  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:01.673225  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:01.698329  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:01.970567  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:02.169790  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:02.172647  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:02.199045  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:02.469628  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:02.670055  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:02.672265  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:02.698282  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:02.969478  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:03.169246  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:03.173045  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:03.198262  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:03.468725  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:03.669405  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:03.673518  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:03.698676  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:03.968911  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:04.169349  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:04.173291  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:04.198628  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:04.468912  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:04.669073  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:04.672671  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:04.698650  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:04.968715  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:05.169108  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:05.173602  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:05.199863  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:05.470164  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:05.670275  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:05.672915  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:05.699118  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:05.969961  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:06.169779  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:06.172906  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:06.198118  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:06.470916  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:06.672787  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:06.678680  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:06.699137  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:06.970076  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:07.170049  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:07.172683  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:07.199155  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:07.470341  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:07.669389  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:07.673300  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:07.698781  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:07.969274  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:08.169614  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:08.174370  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:08.198587  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:08.469161  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:08.668893  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:08.673006  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:08.698924  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:08.970107  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:09.169602  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:09.172558  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:09.198643  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:09.469659  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:09.669722  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:09.672192  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:09.697909  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:09.969110  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:10.173196  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:10.174031  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:10.273953  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:10.469183  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:10.669423  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:10.673591  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:10.698148  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:10.969621  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:11.169302  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:11.172801  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:11.198514  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:11.468783  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:11.668463  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:11.673393  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:11.698051  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:11.969533  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:12.169178  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:12.172902  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:12.198612  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:12.471663  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:12.669554  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:12.673313  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:12.698366  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:12.969812  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:13.170214  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:13.172694  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:13.198417  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:13.471405  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:13.669824  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:13.672413  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:13.698507  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:13.969739  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:14.169586  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:14.172969  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:14.198797  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:14.469671  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:14.669428  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:14.673003  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:14.697666  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:14.969049  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:15.169159  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:15.173634  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:15.198043  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:15.476910  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:15.669929  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:15.672616  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:15.698815  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:15.969993  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:16.169694  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:16.172132  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:16.198276  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:16.469646  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:16.669794  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:16.673094  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:16.698872  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:16.970166  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:17.169050  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:17.173272  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:17.198662  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:17.470844  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:17.669648  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:17.671944  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:17.700193  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:17.969368  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:18.174746  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 19:57:18.175147  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:18.273945  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:18.475206  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:18.672835  275094 kapi.go:107] duration metric: took 48.506928427s to wait for kubernetes.io/minikube-addons=registry ...
	I1227 19:57:18.674896  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:18.698319  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:18.971934  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:19.173201  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:19.198082  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:19.470651  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:19.673664  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:19.698517  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:19.969973  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:20.173643  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:20.198728  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:20.468942  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:20.673418  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:20.697860  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:20.970004  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:21.173370  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:21.198194  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:21.469992  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:21.673415  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:21.698675  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:21.972172  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:22.174832  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:22.275887  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:22.469206  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:22.673207  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:22.698523  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:22.969297  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:23.173799  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:23.199137  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:23.470503  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:23.672775  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:23.699089  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:23.969771  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:24.175174  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:24.198962  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:24.473701  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:24.673129  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:24.699147  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:24.968859  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:25.177560  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:25.273780  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:25.469922  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:25.675123  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:25.709376  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:25.978960  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:26.177092  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:26.198520  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:26.468975  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:26.672936  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:26.697742  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:26.968975  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:27.173758  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:27.198675  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:27.471300  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:27.673584  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:27.698497  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:27.969042  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:28.173597  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:28.198609  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:28.469231  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:28.673603  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:28.698516  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:28.970527  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:29.172851  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:29.198816  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:29.469236  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 19:57:29.673276  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:29.774356  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:29.969571  275094 kapi.go:107] duration metric: took 59.503912464s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1227 19:57:30.173403  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:30.198638  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:30.672957  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:30.698139  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:31.173569  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:31.198276  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:31.673286  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:31.697908  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:32.173131  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:32.197821  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:32.672971  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:32.697977  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:33.173124  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:33.198603  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:33.673221  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:33.698091  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:34.173908  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:34.198694  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:34.672869  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:34.698507  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:35.173083  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:35.199073  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:35.673649  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:35.698348  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:36.172639  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:36.198765  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:36.673216  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:36.697949  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:37.173655  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:37.198338  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:37.673996  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:37.697648  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:38.173101  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:38.198723  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:38.672770  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:38.698595  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:39.173358  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:39.198149  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:39.672932  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:39.698463  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:40.173183  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:40.197971  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:40.673616  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:40.698482  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:41.172702  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:41.198511  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:41.673107  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:41.698564  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:42.177251  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:42.201987  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:42.673942  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:42.697746  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:43.173558  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:43.198438  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:43.674003  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:43.699103  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:44.173386  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:44.198191  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:44.672683  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:44.698699  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:45.210527  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:45.236918  275094 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 19:57:45.685400  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:45.780533  275094 kapi.go:107] duration metric: took 1m12.085537047s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1227 19:57:45.784016  275094 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-686526 cluster.
	I1227 19:57:45.787060  275094 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1227 19:57:45.792388  275094 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1227 19:57:46.173442  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:46.672495  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:47.172944  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:47.673869  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:48.172832  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:48.673140  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:49.173671  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:49.673499  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:50.174425  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:50.672861  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:51.174022  275094 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 19:57:51.675318  275094 kapi.go:107] duration metric: took 1m21.505689456s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1227 19:57:51.678645  275094 out.go:179] * Enabled addons: default-storageclass, ingress-dns, inspektor-gadget, storage-provisioner-rancher, registry-creds, cloud-spanner, storage-provisioner, nvidia-device-plugin, amd-gpu-device-plugin, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1227 19:57:51.681617  275094 addons.go:530] duration metric: took 1m28.763322379s for enable addons: enabled=[default-storageclass ingress-dns inspektor-gadget storage-provisioner-rancher registry-creds cloud-spanner storage-provisioner nvidia-device-plugin amd-gpu-device-plugin metrics-server yakd volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1227 19:57:51.681680  275094 start.go:247] waiting for cluster config update ...
	I1227 19:57:51.681704  275094 start.go:256] writing updated cluster config ...
	I1227 19:57:51.681994  275094 ssh_runner.go:195] Run: rm -f paused
	I1227 19:57:51.686937  275094 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 19:57:51.690281  275094 pod_ready.go:83] waiting for pod "coredns-7d764666f9-xqfvw" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 19:57:51.698776  275094 pod_ready.go:94] pod "coredns-7d764666f9-xqfvw" is "Ready"
	I1227 19:57:51.698854  275094 pod_ready.go:86] duration metric: took 8.542954ms for pod "coredns-7d764666f9-xqfvw" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 19:57:51.701073  275094 pod_ready.go:83] waiting for pod "etcd-addons-686526" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 19:57:51.705869  275094 pod_ready.go:94] pod "etcd-addons-686526" is "Ready"
	I1227 19:57:51.705936  275094 pod_ready.go:86] duration metric: took 4.838045ms for pod "etcd-addons-686526" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 19:57:51.709923  275094 pod_ready.go:83] waiting for pod "kube-apiserver-addons-686526" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 19:57:51.714961  275094 pod_ready.go:94] pod "kube-apiserver-addons-686526" is "Ready"
	I1227 19:57:51.714989  275094 pod_ready.go:86] duration metric: took 5.038214ms for pod "kube-apiserver-addons-686526" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 19:57:51.718040  275094 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-686526" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 19:57:52.090844  275094 pod_ready.go:94] pod "kube-controller-manager-addons-686526" is "Ready"
	I1227 19:57:52.090885  275094 pod_ready.go:86] duration metric: took 372.809724ms for pod "kube-controller-manager-addons-686526" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 19:57:52.290822  275094 pod_ready.go:83] waiting for pod "kube-proxy-7n5r2" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 19:57:52.691337  275094 pod_ready.go:94] pod "kube-proxy-7n5r2" is "Ready"
	I1227 19:57:52.691366  275094 pod_ready.go:86] duration metric: took 400.515997ms for pod "kube-proxy-7n5r2" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 19:57:52.891361  275094 pod_ready.go:83] waiting for pod "kube-scheduler-addons-686526" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 19:57:53.290687  275094 pod_ready.go:94] pod "kube-scheduler-addons-686526" is "Ready"
	I1227 19:57:53.290715  275094 pod_ready.go:86] duration metric: took 399.329662ms for pod "kube-scheduler-addons-686526" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 19:57:53.290728  275094 pod_ready.go:40] duration metric: took 1.603755971s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 19:57:53.343854  275094 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 19:57:53.347183  275094 out.go:203] 
	W1227 19:57:53.350101  275094 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 19:57:53.352956  275094 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 19:57:53.355824  275094 out.go:179] * Done! kubectl is now configured to use "addons-686526" cluster and "default" namespace by default
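The gcp-auth hints printed above name two knobs: the `gcp-auth-skip-secret` label key for opting a pod out of credential mounting, and re-running addons enable with --refresh to remount credentials into existing pods. As a minimal sketch of the opt-out label, assuming a label value of "true" and placeholder pod/container names (the hint itself only names the key):

	# Hypothetical pod manifest; only the gcp-auth-skip-secret label key comes from the hint above.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds-demo        # assumed name, for illustration only
	  labels:
	    gcp-auth-skip-secret: "true" # value assumed; the hint only specifies the key
	spec:
	  containers:
	  - name: app
	    image: busybox
	    command: ["sleep", "3600"]

	# Per the same hint, credentials can be remounted into existing pods by re-running the enable step:
	#   out/minikube-linux-arm64 -p addons-686526 addons enable gcp-auth --refresh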
	
	
	==> CRI-O <==
	Dec 27 19:58:21 addons-686526 crio[828]: time="2025-12-27T19:58:21.759199698Z" level=info msg="Starting container: d721431e4b80b9e09c6b657dff1edea7618ed54e4242ad7b3701137dc9d317c6" id=2d80eb62-ad64-42d5-acbe-90ba32d585b4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 19:58:21 addons-686526 crio[828]: time="2025-12-27T19:58:21.760718501Z" level=info msg="Started container" PID=5417 containerID=d721431e4b80b9e09c6b657dff1edea7618ed54e4242ad7b3701137dc9d317c6 description=default/test-local-path/busybox id=2d80eb62-ad64-42d5-acbe-90ba32d585b4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e66a072774ad35f5e698e3a7aba1ee024bff86a4e39c240c10a44cdb0430b7f2
	Dec 27 19:58:22 addons-686526 crio[828]: time="2025-12-27T19:58:22.788250156Z" level=info msg="Stopping pod sandbox: e66a072774ad35f5e698e3a7aba1ee024bff86a4e39c240c10a44cdb0430b7f2" id=ddaa8aec-a200-4af1-8a6e-d5d055665493 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 27 19:58:22 addons-686526 crio[828]: time="2025-12-27T19:58:22.788544026Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:e66a072774ad35f5e698e3a7aba1ee024bff86a4e39c240c10a44cdb0430b7f2 UID:b3ea5b1d-26f5-4e28-9144-502daa89a44b NetNS:/var/run/netns/c58b3978-8e57-4ea2-bda7-4809902ec16f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400151e1e8}] Aliases:map[]}"
	Dec 27 19:58:22 addons-686526 crio[828]: time="2025-12-27T19:58:22.788684504Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Dec 27 19:58:22 addons-686526 crio[828]: time="2025-12-27T19:58:22.814473631Z" level=info msg="Stopped pod sandbox: e66a072774ad35f5e698e3a7aba1ee024bff86a4e39c240c10a44cdb0430b7f2" id=ddaa8aec-a200-4af1-8a6e-d5d055665493 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 27 19:58:23 addons-686526 crio[828]: time="2025-12-27T19:58:23.804703869Z" level=info msg="Removing container: d721431e4b80b9e09c6b657dff1edea7618ed54e4242ad7b3701137dc9d317c6" id=59c062dd-74f5-4146-b62d-56501870eca7 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 19:58:23 addons-686526 crio[828]: time="2025-12-27T19:58:23.807643039Z" level=info msg="Error loading conmon cgroup of container d721431e4b80b9e09c6b657dff1edea7618ed54e4242ad7b3701137dc9d317c6: cgroup deleted" id=59c062dd-74f5-4146-b62d-56501870eca7 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 19:58:23 addons-686526 crio[828]: time="2025-12-27T19:58:23.82076683Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e/POD" id=85086b6d-0294-4305-8bc0-8bc20c6a2d92 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 19:58:23 addons-686526 crio[828]: time="2025-12-27T19:58:23.821054005Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 19:58:23 addons-686526 crio[828]: time="2025-12-27T19:58:23.824325322Z" level=info msg="Removed container d721431e4b80b9e09c6b657dff1edea7618ed54e4242ad7b3701137dc9d317c6: default/test-local-path/busybox" id=59c062dd-74f5-4146-b62d-56501870eca7 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 19:58:23 addons-686526 crio[828]: time="2025-12-27T19:58:23.859288181Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e Namespace:local-path-storage ID:5a7a162af742fd843f4f00260c1488163a4cd348c0256d878696f7a1ff065b0e UID:55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4 NetNS:/var/run/netns/a295a5be-7af5-49a8-ab45-23e1721a4688 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400151e688}] Aliases:map[]}"
	Dec 27 19:58:23 addons-686526 crio[828]: time="2025-12-27T19:58:23.861120719Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e to CNI network \"kindnet\" (type=ptp)"
	Dec 27 19:58:23 addons-686526 crio[828]: time="2025-12-27T19:58:23.8757966Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e Namespace:local-path-storage ID:5a7a162af742fd843f4f00260c1488163a4cd348c0256d878696f7a1ff065b0e UID:55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4 NetNS:/var/run/netns/a295a5be-7af5-49a8-ab45-23e1721a4688 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400151e688}] Aliases:map[]}"
	Dec 27 19:58:23 addons-686526 crio[828]: time="2025-12-27T19:58:23.875991822Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e for CNI network kindnet (type=ptp)"
	Dec 27 19:58:23 addons-686526 crio[828]: time="2025-12-27T19:58:23.888275971Z" level=info msg="Ran pod sandbox 5a7a162af742fd843f4f00260c1488163a4cd348c0256d878696f7a1ff065b0e with infra container: local-path-storage/helper-pod-delete-pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e/POD" id=85086b6d-0294-4305-8bc0-8bc20c6a2d92 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 19:58:23 addons-686526 crio[828]: time="2025-12-27T19:58:23.889709138Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=8a6905cc-457c-4265-abd8-9b9e3d5f0562 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 19:58:23 addons-686526 crio[828]: time="2025-12-27T19:58:23.893989104Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=e68f683f-1569-4e9d-ac58-817e6be7f5e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 19:58:23 addons-686526 crio[828]: time="2025-12-27T19:58:23.903916818Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e/helper-pod" id=cedc918d-7927-4998-bf8e-1f4964321c03 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 19:58:23 addons-686526 crio[828]: time="2025-12-27T19:58:23.904049698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 19:58:23 addons-686526 crio[828]: time="2025-12-27T19:58:23.928041503Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 19:58:23 addons-686526 crio[828]: time="2025-12-27T19:58:23.928998313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 19:58:23 addons-686526 crio[828]: time="2025-12-27T19:58:23.970976012Z" level=info msg="Created container 2077a1197d67570637b683f7d6abc7ececec28c2525e3388f951bb67ea197a8c: local-path-storage/helper-pod-delete-pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e/helper-pod" id=cedc918d-7927-4998-bf8e-1f4964321c03 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 19:58:23 addons-686526 crio[828]: time="2025-12-27T19:58:23.974261795Z" level=info msg="Starting container: 2077a1197d67570637b683f7d6abc7ececec28c2525e3388f951bb67ea197a8c" id=33a72939-f124-4593-9bdb-43532bdbd1cd name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 19:58:23 addons-686526 crio[828]: time="2025-12-27T19:58:23.98437614Z" level=info msg="Started container" PID=5519 containerID=2077a1197d67570637b683f7d6abc7ececec28c2525e3388f951bb67ea197a8c description=local-path-storage/helper-pod-delete-pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e/helper-pod id=33a72939-f124-4593-9bdb-43532bdbd1cd name=/runtime.v1.RuntimeService/StartContainer sandboxID=5a7a162af742fd843f4f00260c1488163a4cd348c0256d878696f7a1ff065b0e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	2077a1197d675       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             1 second ago         Exited              helper-pod                               0                   5a7a162af742f       helper-pod-delete-pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e   local-path-storage
	252c624bdeeee       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            7 seconds ago        Exited              helper-pod                               0                   98601d547d5ed       helper-pod-create-pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e   local-path-storage
	130fe001f7046       gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9                                          8 seconds ago        Exited              registry-test                            0                   62ebfe617a392       registry-test                                                default
	c9cb12346ec47       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          29 seconds ago       Running             busybox                                  0                   e6a105698f713       busybox                                                      default
	ab98358f995b2       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             34 seconds ago       Running             controller                               0                   88c69933913d4       ingress-nginx-controller-7847b5c79c-nwqdq                    ingress-nginx
	4f8a5a06c11f0       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 40 seconds ago       Running             gcp-auth                                 0                   e1e285e544285       gcp-auth-5bbcf684b5-77mdr                                    gcp-auth
	f7d0de1b69961       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          56 seconds ago       Running             csi-snapshotter                          0                   9e4c68bfded55       csi-hostpathplugin-zr686                                     kube-system
	aeb61d6ca819d       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          57 seconds ago       Running             csi-provisioner                          0                   9e4c68bfded55       csi-hostpathplugin-zr686                                     kube-system
	76e4419cf17d2       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            59 seconds ago       Running             liveness-probe                           0                   9e4c68bfded55       csi-hostpathplugin-zr686                                     kube-system
	8628a6c290886       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           About a minute ago   Running             hostpath                                 0                   9e4c68bfded55       csi-hostpathplugin-zr686                                     kube-system
	8b5869c0f3a2d       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                About a minute ago   Running             node-driver-registrar                    0                   9e4c68bfded55       csi-hostpathplugin-zr686                                     kube-system
	a48c69874e8b6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   About a minute ago   Exited              patch                                    1                   96465cea62b2d       ingress-nginx-admission-patch-j4cs8                          ingress-nginx
	1c8b9f0185dbc       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            About a minute ago   Running             gadget                                   0                   49e75c7f34349       gadget-mqphn                                                 gadget
	a2bd1af183b2e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   About a minute ago   Exited              create                                   0                   c42dbc631921e       ingress-nginx-admission-create-fqtmw                         ingress-nginx
	4bfa93abefaef       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   b4537ded2cb6d       registry-proxy-x4f62                                         kube-system
	970faee02859b       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   dc90685ba9ceb       local-path-provisioner-c44bcd496-n2r4p                       local-path-storage
	db248b3a0ad2c       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   9e4c68bfded55       csi-hostpathplugin-zr686                                     kube-system
	14cd7f4eeeb99       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   0fb75971695ae       csi-hostpath-resizer-0                                       kube-system
	4060cf97180d9       nvcr.io/nvidia/k8s-device-plugin@sha256:10b7b747520ba2314061b5b319d3b2766b9cec1fd9404109c607e85b30af6905                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   ef7ae2ab967ec       nvidia-device-plugin-daemonset-5d8kz                         kube-system
	63f13e1a46344       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   9e53b92c7b116       snapshot-controller-6588d87457-scj48                         kube-system
	76dabe108a8cd       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   8ad49ff063546       kube-ingress-dns-minikube                                    kube-system
	5069e3ca60ffb       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   ae4b9c1551dfb       csi-hostpath-attacher-0                                      kube-system
	46d6b5f1b1e04       ghcr.io/manusa/yakd@sha256:0b7e831df7fe4ad1c8c56a736a8d66bd86e243f6777d3c512ead47199d8fbe1a                                                  About a minute ago   Running             yakd                                     0                   bf6e98b183a4d       yakd-dashboard-865bfb49b9-xqkrr                              yakd-dashboard
	59198603f9dcd       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   6f14fd25c5d44       metrics-server-5778bb4788-vd5cx                              kube-system
	dabe4d66c15ac       gcr.io/cloud-spanner-emulator/emulator@sha256:084e511546640743b2d25fe2ee59800bc7ec910acfc12175bad2270f159f5eba                               About a minute ago   Running             cloud-spanner-emulator                   0                   1bb7c1e5ade5d       cloud-spanner-emulator-5649ccbc87-dxw4p                      default
	6b3af5ed669f8       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   5373a63701e15       snapshot-controller-6588d87457-gkh4d                         kube-system
	78bf49f884889       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   5d7ed094e291e       registry-788cd7d5bc-6s25q                                    kube-system
	affa3c0ed5124       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   9e792eff87391       storage-provisioner                                          kube-system
	2d16e4494d6c0       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                                                             About a minute ago   Running             coredns                                  0                   db67a634b12d6       coredns-7d764666f9-xqfvw                                     kube-system
	acf0f77c3c712       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3                                           About a minute ago   Running             kindnet-cni                              0                   979fb132ef967       kindnet-5dhlc                                                kube-system
	527b6bec92865       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                                                             2 minutes ago        Running             kube-proxy                               0                   740852a492468       kube-proxy-7n5r2                                             kube-system
	f6bd3ae635b96       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                                                             2 minutes ago        Running             etcd                                     0                   8c06d095230c9       etcd-addons-686526                                           kube-system
	665372886f2f5       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                                                             2 minutes ago        Running             kube-controller-manager                  0                   668d8c8905ae6       kube-controller-manager-addons-686526                        kube-system
	46d7e3e9cbbf8       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                                                             2 minutes ago        Running             kube-scheduler                           0                   b1b0a7184a02b       kube-scheduler-addons-686526                                 kube-system
	2339b22f5aa5a       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                                                             2 minutes ago        Running             kube-apiserver                           0                   d9ba5d16f377f       kube-apiserver-addons-686526                                 kube-system
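The container status table above reflects the state of the node's CRI-O runtime at collection time. A rough, assumed way to reproduce a similar listing by hand (exact flags and columns depend on the crictl version shipped in the node image) would be something like:

	out/minikube-linux-arm64 -p addons-686526 ssh -- sudo crictl ps -a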
	
	
	==> coredns [2d16e4494d6c0f9e5b1eff95ad99704e381224b2dd00e0b326a3c2f5fdbe920c] <==
	[INFO] 10.244.0.14:33127 - 16223 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002593977s
	[INFO] 10.244.0.14:33127 - 1829 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000112852s
	[INFO] 10.244.0.14:33127 - 48888 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000080753s
	[INFO] 10.244.0.14:49649 - 7395 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00014577s
	[INFO] 10.244.0.14:49649 - 7165 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000069176s
	[INFO] 10.244.0.14:39215 - 2893 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000077636s
	[INFO] 10.244.0.14:39215 - 2704 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081385s
	[INFO] 10.244.0.14:39128 - 26417 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000074977s
	[INFO] 10.244.0.14:39128 - 26245 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000074362s
	[INFO] 10.244.0.14:48807 - 60993 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005379319s
	[INFO] 10.244.0.14:48807 - 60788 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00546351s
	[INFO] 10.244.0.14:47368 - 25647 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000139518s
	[INFO] 10.244.0.14:47368 - 25418 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000081074s
	[INFO] 10.244.0.20:59346 - 64356 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000284706s
	[INFO] 10.244.0.20:51343 - 14331 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000153245s
	[INFO] 10.244.0.20:46673 - 48425 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000191776s
	[INFO] 10.244.0.20:43493 - 51324 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000145786s
	[INFO] 10.244.0.20:42565 - 40582 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000207423s
	[INFO] 10.244.0.20:59880 - 1746 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000137172s
	[INFO] 10.244.0.20:44017 - 56069 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.012902425s
	[INFO] 10.244.0.20:45686 - 25693 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.026085864s
	[INFO] 10.244.0.20:38643 - 12591 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.009199435s
	[INFO] 10.244.0.20:43004 - 64941 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.011218309s
	[INFO] 10.244.0.22:38505 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000221092s
	[INFO] 10.244.0.22:56072 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00012767s
	
	
	==> describe nodes <==
	Name:               addons-686526
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-686526
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=addons-686526
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T19_56_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-686526
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-686526"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 19:56:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-686526
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 19:58:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 19:58:20 +0000   Sat, 27 Dec 2025 19:56:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 19:58:20 +0000   Sat, 27 Dec 2025 19:56:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 19:58:20 +0000   Sat, 27 Dec 2025 19:56:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 19:58:20 +0000   Sat, 27 Dec 2025 19:56:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-686526
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                c11b8406-f7d8-4ea9-a4e1-5f0be6fbe4e5
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  default                     cloud-spanner-emulator-5649ccbc87-dxw4p                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  gadget                      gadget-mqphn                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  gcp-auth                    gcp-auth-5bbcf684b5-77mdr                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  ingress-nginx               ingress-nginx-controller-7847b5c79c-nwqdq                     100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         115s
	  kube-system                 coredns-7d764666f9-xqfvw                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m2s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 csi-hostpathplugin-zr686                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 etcd-addons-686526                                            100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m7s
	  kube-system                 kindnet-5dhlc                                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m2s
	  kube-system                 kube-apiserver-addons-686526                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-controller-manager-addons-686526                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-7n5r2                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-scheduler-addons-686526                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 metrics-server-5778bb4788-vd5cx                               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         116s
	  kube-system                 nvidia-device-plugin-daemonset-5d8kz                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 registry-788cd7d5bc-6s25q                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 registry-creds-567fb78d95-djlhq                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 registry-proxy-x4f62                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 snapshot-controller-6588d87457-gkh4d                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 snapshot-controller-6588d87457-scj48                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  local-path-storage          helper-pod-delete-pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  local-path-storage          local-path-provisioner-c44bcd496-n2r4p                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  yakd-dashboard              yakd-dashboard-865bfb49b9-xqkrr                               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     116s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  2m3s  node-controller  Node addons-686526 event: Registered Node addons-686526 in Controller
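This node summary corresponds to what `kubectl describe node addons-686526` would report at this point in the run. On the percentages in the Allocated resources block: the node advertises 2 allocatable CPUs, so the aggregated 1050m of CPU requests works out to 1050/2000 ≈ 52%, and the 100m of CPU limits to 5%.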
	
	
	==> dmesg <==
	[ +33.830488] overlayfs: idmapped layers are currently not supported
	[Dec27 19:10] overlayfs: idmapped layers are currently not supported
	[Dec27 19:11] overlayfs: idmapped layers are currently not supported
	[Dec27 19:12] overlayfs: idmapped layers are currently not supported
	[Dec27 19:14] overlayfs: idmapped layers are currently not supported
	[Dec27 19:20] overlayfs: idmapped layers are currently not supported
	[ +33.811090] overlayfs: idmapped layers are currently not supported
	[Dec27 19:21] overlayfs: idmapped layers are currently not supported
	[Dec27 19:23] overlayfs: idmapped layers are currently not supported
	[Dec27 19:24] overlayfs: idmapped layers are currently not supported
	[Dec27 19:25] overlayfs: idmapped layers are currently not supported
	[Dec27 19:26] overlayfs: idmapped layers are currently not supported
	[ +16.831724] overlayfs: idmapped layers are currently not supported
	[Dec27 19:27] overlayfs: idmapped layers are currently not supported
	[Dec27 19:28] overlayfs: idmapped layers are currently not supported
	[ +28.388596] overlayfs: idmapped layers are currently not supported
	[Dec27 19:29] overlayfs: idmapped layers are currently not supported
	[  +9.242530] overlayfs: idmapped layers are currently not supported
	[Dec27 19:30] overlayfs: idmapped layers are currently not supported
	[ +11.577339] overlayfs: idmapped layers are currently not supported
	[Dec27 19:32] overlayfs: idmapped layers are currently not supported
	[ +19.186532] overlayfs: idmapped layers are currently not supported
	[Dec27 19:34] overlayfs: idmapped layers are currently not supported
	[Dec27 19:54] kauditd_printk_skb: 8 callbacks suppressed
	[Dec27 19:56] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f6bd3ae635b96e69ae3cbe10aa5433f6dad9b05cc38a83cac219a431275bfa26] <==
	{"level":"info","ts":"2025-12-27T19:56:12.453990Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T19:56:12.713491Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T19:56:12.713601Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T19:56:12.713685Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2025-12-27T19:56:12.713744Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T19:56:12.713786Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T19:56:12.721483Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-12-27T19:56:12.721573Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T19:56:12.721619Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2025-12-27T19:56:12.721656Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-12-27T19:56:12.725084Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-686526 ClientURLs:[https://192.168.49.2:2379]}","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T19:56:12.725254Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T19:56:12.725437Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T19:56:12.727815Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T19:56:12.734896Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T19:56:12.741388Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T19:56:12.725464Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T19:56:12.751944Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T19:56:12.754880Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-12-27T19:56:12.777122Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T19:56:12.777277Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T19:56:12.777342Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T19:56:12.777404Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T19:56:12.777532Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T19:56:12.781882Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> gcp-auth [4f8a5a06c11f00752453fda5ccc395e913760389931aeb218cae8e4d7eafa022] <==
	2025/12/27 19:57:45 GCP Auth Webhook started!
	2025/12/27 19:57:53 Ready to marshal response ...
	2025/12/27 19:57:53 Ready to write response ...
	2025/12/27 19:57:54 Ready to marshal response ...
	2025/12/27 19:57:54 Ready to write response ...
	2025/12/27 19:57:54 Ready to marshal response ...
	2025/12/27 19:57:54 Ready to write response ...
	2025/12/27 19:58:14 Ready to marshal response ...
	2025/12/27 19:58:14 Ready to write response ...
	2025/12/27 19:58:16 Ready to marshal response ...
	2025/12/27 19:58:16 Ready to write response ...
	2025/12/27 19:58:16 Ready to marshal response ...
	2025/12/27 19:58:16 Ready to write response ...
	2025/12/27 19:58:23 Ready to marshal response ...
	2025/12/27 19:58:23 Ready to write response ...
	
	
	==> kernel <==
	 19:58:25 up  1:40,  0 user,  load average: 2.68, 1.47, 1.04
	Linux addons-686526 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [acf0f77c3c7122dd9f3b2603143da9d067c290830c8d6e96ae769b64065a6f69] <==
	I1227 19:56:27.952481       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 19:56:27.952521       1 metrics.go:72] Registering metrics
	I1227 19:56:27.952695       1 controller.go:711] "Syncing nftables rules"
	I1227 19:56:37.650318       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 19:56:37.650372       1 main.go:301] handling current node
	I1227 19:56:47.652834       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 19:56:47.652869       1 main.go:301] handling current node
	I1227 19:56:57.650599       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 19:56:57.650640       1 main.go:301] handling current node
	I1227 19:57:07.651014       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 19:57:07.651089       1 main.go:301] handling current node
	I1227 19:57:17.650410       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 19:57:17.650458       1 main.go:301] handling current node
	I1227 19:57:27.650607       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 19:57:27.650648       1 main.go:301] handling current node
	I1227 19:57:37.657520       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 19:57:37.657556       1 main.go:301] handling current node
	I1227 19:57:47.652779       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 19:57:47.652810       1 main.go:301] handling current node
	I1227 19:57:57.651003       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 19:57:57.651035       1 main.go:301] handling current node
	I1227 19:58:07.653538       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 19:58:07.653573       1 main.go:301] handling current node
	I1227 19:58:17.651011       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 19:58:17.652664       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2339b22f5aa5ad86cbc68a8ee3d73f387563e2d32cb08c5b0ebd3fe231f755bf] <==
	W1227 19:56:30.735881       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1227 19:56:30.751486       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1227 19:56:33.543581       1 alloc.go:329] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.100.251.55"}
	W1227 19:56:38.035746       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.251.55:443: connect: connection refused
	E1227 19:56:38.035883       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.251.55:443: connect: connection refused" logger="UnhandledError"
	W1227 19:56:38.037656       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.251.55:443: connect: connection refused
	E1227 19:56:38.037779       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.251.55:443: connect: connection refused" logger="UnhandledError"
	W1227 19:56:38.135860       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.251.55:443: connect: connection refused
	E1227 19:56:38.136117       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.251.55:443: connect: connection refused" logger="UnhandledError"
	W1227 19:56:43.647487       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1227 19:56:43.663100       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1227 19:56:43.692330       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1227 19:56:43.709824       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1227 19:56:51.804617       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.205.158:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.205.158:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.205.158:443: connect: connection refused" logger="UnhandledError"
	W1227 19:56:51.804918       1 handler_proxy.go:99] no RequestInfo found in the context
	E1227 19:56:51.804975       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1227 19:56:51.805526       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.205.158:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.205.158:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.205.158:443: connect: connection refused" logger="UnhandledError"
	E1227 19:56:51.812859       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.205.158:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.205.158:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.205.158:443: connect: connection refused" logger="UnhandledError"
	E1227 19:56:51.835533       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.205.158:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.205.158:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.205.158:443: connect: connection refused" logger="UnhandledError"
	I1227 19:56:52.011122       1 handler.go:304] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1227 19:58:03.256823       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:36154: use of closed network connection
	E1227 19:58:03.600168       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:36194: use of closed network connection
	
	
	==> kube-controller-manager [665372886f2f5a56019a7acc4aaba64773a2800425add6614a10ed8d31727212] <==
	I1227 19:56:22.053862       1 shared_informer.go:377] "Caches are synced"
	I1227 19:56:22.053897       1 shared_informer.go:377] "Caches are synced"
	I1227 19:56:22.053997       1 shared_informer.go:377] "Caches are synced"
	I1227 19:56:22.056764       1 shared_informer.go:377] "Caches are synced"
	I1227 19:56:22.056847       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 19:56:22.056911       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="addons-686526"
	I1227 19:56:22.056965       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1227 19:56:22.056989       1 shared_informer.go:377] "Caches are synced"
	I1227 19:56:22.057005       1 shared_informer.go:377] "Caches are synced"
	I1227 19:56:22.057072       1 shared_informer.go:377] "Caches are synced"
	I1227 19:56:22.057162       1 shared_informer.go:377] "Caches are synced"
	I1227 19:56:22.057211       1 shared_informer.go:377] "Caches are synced"
	I1227 19:56:22.057266       1 shared_informer.go:377] "Caches are synced"
	I1227 19:56:22.069956       1 shared_informer.go:377] "Caches are synced"
	I1227 19:56:22.150447       1 shared_informer.go:377] "Caches are synced"
	I1227 19:56:22.156694       1 shared_informer.go:377] "Caches are synced"
	I1227 19:56:22.156717       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 19:56:22.156754       1 garbagecollector.go:169] "Proceeding to collect garbage"
	E1227 19:56:29.160700       1 replica_set.go:592] "Unhandled Error" err="sync \"kube-system/metrics-server-5778bb4788\" failed with pods \"metrics-server-5778bb4788-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1227 19:56:42.059598       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1227 19:56:52.076402       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1227 19:56:52.076482       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 19:56:52.159968       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 19:56:52.177578       1 shared_informer.go:377] "Caches are synced"
	I1227 19:56:52.260539       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [527b6bec92865051022b476e87f4a56edd36b846695cf95de5df71efbc3328fd] <==
	I1227 19:56:24.362558       1 server_linux.go:53] "Using iptables proxy"
	I1227 19:56:24.475066       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 19:56:24.575893       1 shared_informer.go:377] "Caches are synced"
	I1227 19:56:24.575938       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1227 19:56:24.576016       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 19:56:24.608103       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 19:56:24.608157       1 server_linux.go:136] "Using iptables Proxier"
	I1227 19:56:24.618699       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 19:56:24.619002       1 server.go:529] "Version info" version="v1.35.0"
	I1227 19:56:24.619015       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 19:56:24.632232       1 config.go:200] "Starting service config controller"
	I1227 19:56:24.632251       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 19:56:24.632268       1 config.go:106] "Starting endpoint slice config controller"
	I1227 19:56:24.632271       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 19:56:24.632293       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 19:56:24.632297       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 19:56:24.634567       1 config.go:309] "Starting node config controller"
	I1227 19:56:24.634580       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 19:56:24.634587       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 19:56:24.733730       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 19:56:24.733767       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 19:56:24.733802       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [46d7e3e9cbbf8984d7ddb16c6a136272c405ce68b4de5cfb0b73b424a67b97ba] <==
	E1227 19:56:15.241290       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 19:56:15.241365       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 19:56:15.241377       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 19:56:15.241551       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 19:56:15.241617       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 19:56:15.241714       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 19:56:15.241760       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 19:56:15.241807       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 19:56:15.241850       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 19:56:15.241901       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 19:56:15.241719       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 19:56:15.241991       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 19:56:16.103528       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 19:56:16.134681       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 19:56:16.135809       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 19:56:16.230009       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 19:56:16.232134       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 19:56:16.240918       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 19:56:16.269777       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 19:56:16.352467       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 19:56:16.366191       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 19:56:16.378550       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 19:56:16.423598       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 19:56:16.685080       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	I1227 19:56:19.317420       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 19:58:21 addons-686526 kubelet[1259]: I1227 19:58:21.751447    1259 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ba283160-fa26-43d9-9bbd-7a4bddbe8020" path="/var/lib/kubelet/pods/ba283160-fa26-43d9-9bbd-7a4bddbe8020/volumes"
	Dec 27 19:58:22 addons-686526 kubelet[1259]: I1227 19:58:22.906726    1259 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/b3ea5b1d-26f5-4e28-9144-502daa89a44b-pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e\" (UniqueName: \"kubernetes.io/host-path/b3ea5b1d-26f5-4e28-9144-502daa89a44b-pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e\") pod \"b3ea5b1d-26f5-4e28-9144-502daa89a44b\" (UID: \"b3ea5b1d-26f5-4e28-9144-502daa89a44b\") "
	Dec 27 19:58:22 addons-686526 kubelet[1259]: I1227 19:58:22.907237    1259 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/b3ea5b1d-26f5-4e28-9144-502daa89a44b-kube-api-access-lxwpm\" (UniqueName: \"kubernetes.io/projected/b3ea5b1d-26f5-4e28-9144-502daa89a44b-kube-api-access-lxwpm\") pod \"b3ea5b1d-26f5-4e28-9144-502daa89a44b\" (UID: \"b3ea5b1d-26f5-4e28-9144-502daa89a44b\") "
	Dec 27 19:58:22 addons-686526 kubelet[1259]: I1227 19:58:22.907340    1259 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/b3ea5b1d-26f5-4e28-9144-502daa89a44b-gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b3ea5b1d-26f5-4e28-9144-502daa89a44b-gcp-creds\") pod \"b3ea5b1d-26f5-4e28-9144-502daa89a44b\" (UID: \"b3ea5b1d-26f5-4e28-9144-502daa89a44b\") "
	Dec 27 19:58:22 addons-686526 kubelet[1259]: I1227 19:58:22.907548    1259 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ea5b1d-26f5-4e28-9144-502daa89a44b-gcp-creds" pod "b3ea5b1d-26f5-4e28-9144-502daa89a44b" (UID: "b3ea5b1d-26f5-4e28-9144-502daa89a44b"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 27 19:58:22 addons-686526 kubelet[1259]: I1227 19:58:22.907769    1259 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3ea5b1d-26f5-4e28-9144-502daa89a44b-pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e" pod "b3ea5b1d-26f5-4e28-9144-502daa89a44b" (UID: "b3ea5b1d-26f5-4e28-9144-502daa89a44b"). InnerVolumeSpecName "pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 27 19:58:22 addons-686526 kubelet[1259]: I1227 19:58:22.917425    1259 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3ea5b1d-26f5-4e28-9144-502daa89a44b-kube-api-access-lxwpm" pod "b3ea5b1d-26f5-4e28-9144-502daa89a44b" (UID: "b3ea5b1d-26f5-4e28-9144-502daa89a44b"). InnerVolumeSpecName "kube-api-access-lxwpm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 27 19:58:23 addons-686526 kubelet[1259]: I1227 19:58:23.008502    1259 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b3ea5b1d-26f5-4e28-9144-502daa89a44b-gcp-creds\") on node \"addons-686526\" DevicePath \"\""
	Dec 27 19:58:23 addons-686526 kubelet[1259]: I1227 19:58:23.008677    1259 reconciler_common.go:299] "Volume detached for volume \"pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e\" (UniqueName: \"kubernetes.io/host-path/b3ea5b1d-26f5-4e28-9144-502daa89a44b-pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e\") on node \"addons-686526\" DevicePath \"\""
	Dec 27 19:58:23 addons-686526 kubelet[1259]: I1227 19:58:23.008745    1259 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lxwpm\" (UniqueName: \"kubernetes.io/projected/b3ea5b1d-26f5-4e28-9144-502daa89a44b-kube-api-access-lxwpm\") on node \"addons-686526\" DevicePath \"\""
	Dec 27 19:58:23 addons-686526 kubelet[1259]: I1227 19:58:23.616398    1259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4-data\") pod \"helper-pod-delete-pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e\" (UID: \"55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4\") " pod="local-path-storage/helper-pod-delete-pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e"
	Dec 27 19:58:23 addons-686526 kubelet[1259]: I1227 19:58:23.616452    1259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4-script\") pod \"helper-pod-delete-pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e\" (UID: \"55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4\") " pod="local-path-storage/helper-pod-delete-pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e"
	Dec 27 19:58:23 addons-686526 kubelet[1259]: I1227 19:58:23.616478    1259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5spj\" (UniqueName: \"kubernetes.io/projected/55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4-kube-api-access-t5spj\") pod \"helper-pod-delete-pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e\" (UID: \"55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4\") " pod="local-path-storage/helper-pod-delete-pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e"
	Dec 27 19:58:23 addons-686526 kubelet[1259]: I1227 19:58:23.616535    1259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4-gcp-creds\") pod \"helper-pod-delete-pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e\" (UID: \"55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4\") " pod="local-path-storage/helper-pod-delete-pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e"
	Dec 27 19:58:23 addons-686526 kubelet[1259]: I1227 19:58:23.749102    1259 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b3ea5b1d-26f5-4e28-9144-502daa89a44b" path="/var/lib/kubelet/pods/b3ea5b1d-26f5-4e28-9144-502daa89a44b/volumes"
	Dec 27 19:58:23 addons-686526 kubelet[1259]: I1227 19:58:23.803031    1259 scope.go:122] "RemoveContainer" containerID="d721431e4b80b9e09c6b657dff1edea7618ed54e4242ad7b3701137dc9d317c6"
	Dec 27 19:58:23 addons-686526 kubelet[1259]: W1227 19:58:23.882107    1259 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/9ea7fe87471b8694080849a8594d8c88b258a060dc9a4bf4fa5c68a0fbc5552e/crio-5a7a162af742fd843f4f00260c1488163a4cd348c0256d878696f7a1ff065b0e WatchSource:0}: Error finding container 5a7a162af742fd843f4f00260c1488163a4cd348c0256d878696f7a1ff065b0e: Status 404 returned error can't find the container with id 5a7a162af742fd843f4f00260c1488163a4cd348c0256d878696f7a1ff065b0e
	Dec 27 19:58:25 addons-686526 kubelet[1259]: I1227 19:58:25.943286    1259 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4-kube-api-access-t5spj\" (UniqueName: \"kubernetes.io/projected/55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4-kube-api-access-t5spj\") pod \"55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4\" (UID: \"55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4\") "
	Dec 27 19:58:25 addons-686526 kubelet[1259]: I1227 19:58:25.943357    1259 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4-gcp-creds\" (UniqueName: \"kubernetes.io/host-path/55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4-gcp-creds\") pod \"55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4\" (UID: \"55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4\") "
	Dec 27 19:58:25 addons-686526 kubelet[1259]: I1227 19:58:25.943378    1259 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4-data\" (UniqueName: \"kubernetes.io/host-path/55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4-data\") pod \"55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4\" (UID: \"55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4\") "
	Dec 27 19:58:25 addons-686526 kubelet[1259]: I1227 19:58:25.943412    1259 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4-script\" (UniqueName: \"kubernetes.io/configmap/55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4-script\") pod \"55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4\" (UID: \"55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4\") "
	Dec 27 19:58:25 addons-686526 kubelet[1259]: I1227 19:58:25.943858    1259 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4-script" pod "55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4" (UID: "55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Dec 27 19:58:25 addons-686526 kubelet[1259]: I1227 19:58:25.944131    1259 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4-gcp-creds" pod "55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4" (UID: "55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 27 19:58:25 addons-686526 kubelet[1259]: I1227 19:58:25.944186    1259 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4-data" pod "55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4" (UID: "55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 27 19:58:25 addons-686526 kubelet[1259]: I1227 19:58:25.950510    1259 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4-kube-api-access-t5spj" pod "55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4" (UID: "55c74470-5f41-4acd-a1ec-ff2e6e6f1bd4"). InnerVolumeSpecName "kube-api-access-t5spj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	
	
	==> storage-provisioner [affa3c0ed51244e760e065712febf0f9b147fb070147eea321d9eccfb748d170] <==
	W1227 19:58:01.686211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:58:03.690068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:58:03.694881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:58:05.697911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:58:05.702405       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:58:07.706033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:58:07.710482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:58:09.713626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:58:09.723233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:58:11.727443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:58:11.732177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:58:13.735862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:58:13.744309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:58:15.747307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:58:15.753550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:58:17.756296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:58:17.761700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:58:19.767607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:58:19.774071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:58:21.781298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:58:21.789126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:58:23.793176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:58:23.803185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:58:25.806642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 19:58:25.812714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
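Two patterns in the captured component logs above are routine noise rather than part of the failure: the kube-scheduler's "Failed to watch ... is forbidden" errors at 19:56:15-16 stop once its caches sync at 19:56:19 (the usual RBAC startup race), and the storage-provisioner's repeated "v1 Endpoints is deprecated in v1.33+" warnings only mean it still polls the legacy Endpoints API. As a hedged aside (not something the test harness runs), the replacement objects the warning points to can be inspected with the context name used throughout this run:

	kubectl --context addons-686526 -n kube-system get endpointslices.discovery.k8s.io
	kubectl --context addons-686526 -n kube-system get endpoints    # legacy API that triggers the warning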
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-686526 -n addons-686526
helpers_test.go:270: (dbg) Run:  kubectl --context addons-686526 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-fqtmw ingress-nginx-admission-patch-j4cs8 registry-creds-567fb78d95-djlhq
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-686526 describe pod ingress-nginx-admission-create-fqtmw ingress-nginx-admission-patch-j4cs8 registry-creds-567fb78d95-djlhq
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-686526 describe pod ingress-nginx-admission-create-fqtmw ingress-nginx-admission-patch-j4cs8 registry-creds-567fb78d95-djlhq: exit status 1 (99.971101ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fqtmw" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-j4cs8" not found
	Error from server (NotFound): pods "registry-creds-567fb78d95-djlhq" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-686526 describe pod ingress-nginx-admission-create-fqtmw ingress-nginx-admission-patch-j4cs8 registry-creds-567fb78d95-djlhq: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-686526 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-686526 addons disable headlamp --alsologtostderr -v=1: exit status 11 (242.790987ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 19:58:26.910611  282435 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:58:26.911505  282435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:26.911520  282435 out.go:374] Setting ErrFile to fd 2...
	I1227 19:58:26.911526  282435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:26.911826  282435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 19:58:26.912154  282435 mustload.go:66] Loading cluster: addons-686526
	I1227 19:58:26.912559  282435 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:26.912582  282435 addons.go:622] checking whether the cluster is paused
	I1227 19:58:26.912725  282435 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:26.912744  282435 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:58:26.913301  282435 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:58:26.931918  282435 ssh_runner.go:195] Run: systemctl --version
	I1227 19:58:26.931986  282435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:58:26.950973  282435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:58:27.048191  282435 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:58:27.048278  282435 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:58:27.076916  282435 cri.go:96] found id: "f7d0de1b69961a288553903b4992c35c22e5e36f33340aca5549666ccae62780"
	I1227 19:58:27.076992  282435 cri.go:96] found id: "aeb61d6ca819d545ff012b4bb79b4b12a2d3e4fd7c017b8aff9b3f73e7e4fd72"
	I1227 19:58:27.077012  282435 cri.go:96] found id: "76e4419cf17d2bcb37cb37c985de6da6879900fefbf8d8753bdc6f8b601a7b70"
	I1227 19:58:27.077038  282435 cri.go:96] found id: "8628a6c29088613cf486c32e6965fba1763bf29fdb83553c63ccc780c26e53e1"
	I1227 19:58:27.077064  282435 cri.go:96] found id: "8b5869c0f3a2d08ab29fe49e15a8657d278dcd90725095d352af4569975b4092"
	I1227 19:58:27.077089  282435 cri.go:96] found id: "4bfa93abefaefe59e582f297d87729a6eae0fb1a9a0a16200706276a345bd9f5"
	I1227 19:58:27.077108  282435 cri.go:96] found id: "db248b3a0ad2c188ffc0bd6285c801a114ef2a163b57e80967eed7058d10b079"
	I1227 19:58:27.077131  282435 cri.go:96] found id: "14cd7f4eeeb99b20541874eb2f68a4348b74b68c609a377d4e92df28a82336e2"
	I1227 19:58:27.077159  282435 cri.go:96] found id: "4060cf97180d981d325d0d9e6eb59ef66cf733a447793c3c3c19354ffe8cc564"
	I1227 19:58:27.077181  282435 cri.go:96] found id: "63f13e1a463440ef808022bc00191127d600efb20f8a410beeafe0ff3eba5e18"
	I1227 19:58:27.077204  282435 cri.go:96] found id: "76dabe108a8cdf25e513799e72a1701938261cabbaf7677deb7cf44b74e6693e"
	I1227 19:58:27.077229  282435 cri.go:96] found id: "5069e3ca60ffbe2dae6fb5bf95131972cc927b0230086e1374f3aa33984f9a66"
	I1227 19:58:27.077257  282435 cri.go:96] found id: "59198603f9dcd704e4c9bf1e3690d726408d5fbe97ca91fbee22d027956132a4"
	I1227 19:58:27.077276  282435 cri.go:96] found id: "6b3af5ed669f8def1398487b65fab3dc84efc5016bc4e43413569ae9cf491fae"
	I1227 19:58:27.077298  282435 cri.go:96] found id: "78bf49f8848895542e9d07cd088de90af51710dcb99e3756d7a1ae5577d88b11"
	I1227 19:58:27.077333  282435 cri.go:96] found id: "affa3c0ed51244e760e065712febf0f9b147fb070147eea321d9eccfb748d170"
	I1227 19:58:27.077354  282435 cri.go:96] found id: "2d16e4494d6c0f9e5b1eff95ad99704e381224b2dd00e0b326a3c2f5fdbe920c"
	I1227 19:58:27.077378  282435 cri.go:96] found id: "acf0f77c3c7122dd9f3b2603143da9d067c290830c8d6e96ae769b64065a6f69"
	I1227 19:58:27.077403  282435 cri.go:96] found id: "527b6bec92865051022b476e87f4a56edd36b846695cf95de5df71efbc3328fd"
	I1227 19:58:27.077425  282435 cri.go:96] found id: "f6bd3ae635b96e69ae3cbe10aa5433f6dad9b05cc38a83cac219a431275bfa26"
	I1227 19:58:27.077480  282435 cri.go:96] found id: "665372886f2f5a56019a7acc4aaba64773a2800425add6614a10ed8d31727212"
	I1227 19:58:27.077509  282435 cri.go:96] found id: "46d7e3e9cbbf8984d7ddb16c6a136272c405ce68b4de5cfb0b73b424a67b97ba"
	I1227 19:58:27.077520  282435 cri.go:96] found id: "2339b22f5aa5ad86cbc68a8ee3d73f387563e2d32cb08c5b0ebd3fe231f755bf"
	I1227 19:58:27.077523  282435 cri.go:96] found id: ""
	I1227 19:58:27.077591  282435 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:58:27.092058  282435 out.go:203] 
	W1227 19:58:27.095024  282435 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:58:27.095049  282435 out.go:285] * 
	* 
	W1227 19:58:27.097935  282435 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:58:27.101036  282435 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-686526 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.38s)
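Every exit-status-11 addon-disable failure in this group stops at the same point: minikube's paused-state check lists kube-system containers with crictl (which succeeds) and then runs `sudo runc list -f json` on the node, which fails with "open /run/runc: no such file or directory" under the crio runtime. A minimal manual reproduction, assuming the addons-686526 profile from this run is still up, would be roughly:

	# commands taken from the captured stderr above; profile name and flags are from this run
	out/minikube-linux-arm64 -p addons-686526 ssh "sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	out/minikube-linux-arm64 -p addons-686526 ssh "sudo runc list -f json"    # expected: open /run/runc: no such file or directory

Which state directory crio's runtime actually uses on this node is not visible in the log, so the snippet only demonstrates the failing check, not a fix.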

                                                
                                    
TestAddons/parallel/CloudSpanner (5.34s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-dxw4p" [41104995-1102-41ad-9ae7-e8b788e1d4dd] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005941434s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-686526 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-686526 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (322.523628ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 19:58:23.499018  281805 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:58:23.500348  281805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:23.500396  281805 out.go:374] Setting ErrFile to fd 2...
	I1227 19:58:23.500427  281805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:23.502394  281805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 19:58:23.502775  281805 mustload.go:66] Loading cluster: addons-686526
	I1227 19:58:23.503158  281805 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:23.503187  281805 addons.go:622] checking whether the cluster is paused
	I1227 19:58:23.503296  281805 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:23.503306  281805 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:58:23.503834  281805 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:58:23.523513  281805 ssh_runner.go:195] Run: systemctl --version
	I1227 19:58:23.523569  281805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:58:23.543452  281805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:58:23.648707  281805 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:58:23.648792  281805 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:58:23.692460  281805 cri.go:96] found id: "f7d0de1b69961a288553903b4992c35c22e5e36f33340aca5549666ccae62780"
	I1227 19:58:23.692479  281805 cri.go:96] found id: "aeb61d6ca819d545ff012b4bb79b4b12a2d3e4fd7c017b8aff9b3f73e7e4fd72"
	I1227 19:58:23.692483  281805 cri.go:96] found id: "76e4419cf17d2bcb37cb37c985de6da6879900fefbf8d8753bdc6f8b601a7b70"
	I1227 19:58:23.692487  281805 cri.go:96] found id: "8628a6c29088613cf486c32e6965fba1763bf29fdb83553c63ccc780c26e53e1"
	I1227 19:58:23.692490  281805 cri.go:96] found id: "8b5869c0f3a2d08ab29fe49e15a8657d278dcd90725095d352af4569975b4092"
	I1227 19:58:23.692494  281805 cri.go:96] found id: "4bfa93abefaefe59e582f297d87729a6eae0fb1a9a0a16200706276a345bd9f5"
	I1227 19:58:23.692497  281805 cri.go:96] found id: "db248b3a0ad2c188ffc0bd6285c801a114ef2a163b57e80967eed7058d10b079"
	I1227 19:58:23.692500  281805 cri.go:96] found id: "14cd7f4eeeb99b20541874eb2f68a4348b74b68c609a377d4e92df28a82336e2"
	I1227 19:58:23.692503  281805 cri.go:96] found id: "4060cf97180d981d325d0d9e6eb59ef66cf733a447793c3c3c19354ffe8cc564"
	I1227 19:58:23.692516  281805 cri.go:96] found id: "63f13e1a463440ef808022bc00191127d600efb20f8a410beeafe0ff3eba5e18"
	I1227 19:58:23.692520  281805 cri.go:96] found id: "76dabe108a8cdf25e513799e72a1701938261cabbaf7677deb7cf44b74e6693e"
	I1227 19:58:23.692527  281805 cri.go:96] found id: "5069e3ca60ffbe2dae6fb5bf95131972cc927b0230086e1374f3aa33984f9a66"
	I1227 19:58:23.692530  281805 cri.go:96] found id: "59198603f9dcd704e4c9bf1e3690d726408d5fbe97ca91fbee22d027956132a4"
	I1227 19:58:23.692534  281805 cri.go:96] found id: "6b3af5ed669f8def1398487b65fab3dc84efc5016bc4e43413569ae9cf491fae"
	I1227 19:58:23.692552  281805 cri.go:96] found id: "78bf49f8848895542e9d07cd088de90af51710dcb99e3756d7a1ae5577d88b11"
	I1227 19:58:23.692557  281805 cri.go:96] found id: "affa3c0ed51244e760e065712febf0f9b147fb070147eea321d9eccfb748d170"
	I1227 19:58:23.692561  281805 cri.go:96] found id: "2d16e4494d6c0f9e5b1eff95ad99704e381224b2dd00e0b326a3c2f5fdbe920c"
	I1227 19:58:23.692564  281805 cri.go:96] found id: "acf0f77c3c7122dd9f3b2603143da9d067c290830c8d6e96ae769b64065a6f69"
	I1227 19:58:23.692567  281805 cri.go:96] found id: "527b6bec92865051022b476e87f4a56edd36b846695cf95de5df71efbc3328fd"
	I1227 19:58:23.692570  281805 cri.go:96] found id: "f6bd3ae635b96e69ae3cbe10aa5433f6dad9b05cc38a83cac219a431275bfa26"
	I1227 19:58:23.692575  281805 cri.go:96] found id: "665372886f2f5a56019a7acc4aaba64773a2800425add6614a10ed8d31727212"
	I1227 19:58:23.692578  281805 cri.go:96] found id: "46d7e3e9cbbf8984d7ddb16c6a136272c405ce68b4de5cfb0b73b424a67b97ba"
	I1227 19:58:23.692581  281805 cri.go:96] found id: "2339b22f5aa5ad86cbc68a8ee3d73f387563e2d32cb08c5b0ebd3fe231f755bf"
	I1227 19:58:23.692584  281805 cri.go:96] found id: ""
	I1227 19:58:23.692653  281805 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:58:23.714419  281805 out.go:203] 
	W1227 19:58:23.717429  281805 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:58:23.717476  281805 out.go:285] * 
	* 
	W1227 19:58:23.720396  281805 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:58:23.723658  281805 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-686526 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.34s)

                                                
                                    
TestAddons/parallel/LocalPath (7.47s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-686526 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-686526 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-686526 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-686526 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-686526 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-686526 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-686526 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [b3ea5b1d-26f5-4e28-9144-502daa89a44b] Pending
helpers_test.go:353: "test-local-path" [b3ea5b1d-26f5-4e28-9144-502daa89a44b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [b3ea5b1d-26f5-4e28-9144-502daa89a44b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 2.003716103s
addons_test.go:969: (dbg) Run:  kubectl --context addons-686526 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-686526 ssh "cat /opt/local-path-provisioner/pvc-ae4c0c89-1c28-4cca-8d71-55859c82f23e_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-686526 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-686526 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-686526 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-686526 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (361.518358ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 19:58:23.609798  281831 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:58:23.610611  281831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:23.610652  281831 out.go:374] Setting ErrFile to fd 2...
	I1227 19:58:23.610673  281831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:23.611080  281831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 19:58:23.611476  281831 mustload.go:66] Loading cluster: addons-686526
	I1227 19:58:23.612150  281831 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:23.612201  281831 addons.go:622] checking whether the cluster is paused
	I1227 19:58:23.612393  281831 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:23.612447  281831 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:58:23.613328  281831 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:58:23.631396  281831 ssh_runner.go:195] Run: systemctl --version
	I1227 19:58:23.631451  281831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:58:23.658004  281831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:58:23.777461  281831 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:58:23.777564  281831 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:58:23.862419  281831 cri.go:96] found id: "f7d0de1b69961a288553903b4992c35c22e5e36f33340aca5549666ccae62780"
	I1227 19:58:23.862439  281831 cri.go:96] found id: "aeb61d6ca819d545ff012b4bb79b4b12a2d3e4fd7c017b8aff9b3f73e7e4fd72"
	I1227 19:58:23.862443  281831 cri.go:96] found id: "76e4419cf17d2bcb37cb37c985de6da6879900fefbf8d8753bdc6f8b601a7b70"
	I1227 19:58:23.862447  281831 cri.go:96] found id: "8628a6c29088613cf486c32e6965fba1763bf29fdb83553c63ccc780c26e53e1"
	I1227 19:58:23.862450  281831 cri.go:96] found id: "8b5869c0f3a2d08ab29fe49e15a8657d278dcd90725095d352af4569975b4092"
	I1227 19:58:23.862456  281831 cri.go:96] found id: "4bfa93abefaefe59e582f297d87729a6eae0fb1a9a0a16200706276a345bd9f5"
	I1227 19:58:23.862459  281831 cri.go:96] found id: "db248b3a0ad2c188ffc0bd6285c801a114ef2a163b57e80967eed7058d10b079"
	I1227 19:58:23.862462  281831 cri.go:96] found id: "14cd7f4eeeb99b20541874eb2f68a4348b74b68c609a377d4e92df28a82336e2"
	I1227 19:58:23.862466  281831 cri.go:96] found id: "4060cf97180d981d325d0d9e6eb59ef66cf733a447793c3c3c19354ffe8cc564"
	I1227 19:58:23.862471  281831 cri.go:96] found id: "63f13e1a463440ef808022bc00191127d600efb20f8a410beeafe0ff3eba5e18"
	I1227 19:58:23.862475  281831 cri.go:96] found id: "76dabe108a8cdf25e513799e72a1701938261cabbaf7677deb7cf44b74e6693e"
	I1227 19:58:23.862478  281831 cri.go:96] found id: "5069e3ca60ffbe2dae6fb5bf95131972cc927b0230086e1374f3aa33984f9a66"
	I1227 19:58:23.862481  281831 cri.go:96] found id: "59198603f9dcd704e4c9bf1e3690d726408d5fbe97ca91fbee22d027956132a4"
	I1227 19:58:23.862483  281831 cri.go:96] found id: "6b3af5ed669f8def1398487b65fab3dc84efc5016bc4e43413569ae9cf491fae"
	I1227 19:58:23.862487  281831 cri.go:96] found id: "78bf49f8848895542e9d07cd088de90af51710dcb99e3756d7a1ae5577d88b11"
	I1227 19:58:23.862492  281831 cri.go:96] found id: "affa3c0ed51244e760e065712febf0f9b147fb070147eea321d9eccfb748d170"
	I1227 19:58:23.862495  281831 cri.go:96] found id: "2d16e4494d6c0f9e5b1eff95ad99704e381224b2dd00e0b326a3c2f5fdbe920c"
	I1227 19:58:23.862500  281831 cri.go:96] found id: "acf0f77c3c7122dd9f3b2603143da9d067c290830c8d6e96ae769b64065a6f69"
	I1227 19:58:23.862503  281831 cri.go:96] found id: "527b6bec92865051022b476e87f4a56edd36b846695cf95de5df71efbc3328fd"
	I1227 19:58:23.862506  281831 cri.go:96] found id: "f6bd3ae635b96e69ae3cbe10aa5433f6dad9b05cc38a83cac219a431275bfa26"
	I1227 19:58:23.862516  281831 cri.go:96] found id: "665372886f2f5a56019a7acc4aaba64773a2800425add6614a10ed8d31727212"
	I1227 19:58:23.862520  281831 cri.go:96] found id: "46d7e3e9cbbf8984d7ddb16c6a136272c405ce68b4de5cfb0b73b424a67b97ba"
	I1227 19:58:23.862523  281831 cri.go:96] found id: "2339b22f5aa5ad86cbc68a8ee3d73f387563e2d32cb08c5b0ebd3fe231f755bf"
	I1227 19:58:23.862525  281831 cri.go:96] found id: ""
	I1227 19:58:23.862573  281831 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:58:23.895929  281831 out.go:203] 
	W1227 19:58:23.899977  281831 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:58:23.900007  281831 out.go:285] * 
	* 
	W1227 19:58:23.903166  281831 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:58:23.906490  281831 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-686526 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (7.47s)
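This disable failure (and the identical ones in the other addon tests) aborts in the same place: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then asking runc for its container list. On this node /run/runc does not exist (CRI-O here is presumably using another OCI runtime such as crun), so `sudo runc list -f json` exits with status 1 and the command fails with MK_ADDON_DISABLE_PAUSED. A minimal reproduction of that check, using only the commands that appear in the log above (running them through `minikube ssh` is an assumption about how to reach the node):

	# the container listing that succeeds (cri.go:61)
	minikube -p addons-686526 ssh -- sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the runc listing that fails (the paused check)
	minikube -p addons-686526 ssh -- sudo runc list -f json
	# observed failure on this node:
	#   level=error msg="open /run/runc: no such file or directory"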

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.31s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-5d8kz" [34367370-48b0-4086-b7e4-1c176da80c81] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004509687s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-686526 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-686526 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (309.067293ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 19:58:16.210023  281408 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:58:16.221553  281408 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:16.221576  281408 out.go:374] Setting ErrFile to fd 2...
	I1227 19:58:16.221582  281408 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:16.221877  281408 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 19:58:16.222190  281408 mustload.go:66] Loading cluster: addons-686526
	I1227 19:58:16.222569  281408 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:16.222580  281408 addons.go:622] checking whether the cluster is paused
	I1227 19:58:16.222686  281408 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:16.222696  281408 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:58:16.223265  281408 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:58:16.244552  281408 ssh_runner.go:195] Run: systemctl --version
	I1227 19:58:16.244618  281408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:58:16.265943  281408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:58:16.364583  281408 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:58:16.364656  281408 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:58:16.415170  281408 cri.go:96] found id: "f7d0de1b69961a288553903b4992c35c22e5e36f33340aca5549666ccae62780"
	I1227 19:58:16.415187  281408 cri.go:96] found id: "aeb61d6ca819d545ff012b4bb79b4b12a2d3e4fd7c017b8aff9b3f73e7e4fd72"
	I1227 19:58:16.415192  281408 cri.go:96] found id: "76e4419cf17d2bcb37cb37c985de6da6879900fefbf8d8753bdc6f8b601a7b70"
	I1227 19:58:16.415196  281408 cri.go:96] found id: "8628a6c29088613cf486c32e6965fba1763bf29fdb83553c63ccc780c26e53e1"
	I1227 19:58:16.415199  281408 cri.go:96] found id: "8b5869c0f3a2d08ab29fe49e15a8657d278dcd90725095d352af4569975b4092"
	I1227 19:58:16.415203  281408 cri.go:96] found id: "4bfa93abefaefe59e582f297d87729a6eae0fb1a9a0a16200706276a345bd9f5"
	I1227 19:58:16.415206  281408 cri.go:96] found id: "db248b3a0ad2c188ffc0bd6285c801a114ef2a163b57e80967eed7058d10b079"
	I1227 19:58:16.415209  281408 cri.go:96] found id: "14cd7f4eeeb99b20541874eb2f68a4348b74b68c609a377d4e92df28a82336e2"
	I1227 19:58:16.415212  281408 cri.go:96] found id: "4060cf97180d981d325d0d9e6eb59ef66cf733a447793c3c3c19354ffe8cc564"
	I1227 19:58:16.415220  281408 cri.go:96] found id: "63f13e1a463440ef808022bc00191127d600efb20f8a410beeafe0ff3eba5e18"
	I1227 19:58:16.415224  281408 cri.go:96] found id: "76dabe108a8cdf25e513799e72a1701938261cabbaf7677deb7cf44b74e6693e"
	I1227 19:58:16.415227  281408 cri.go:96] found id: "5069e3ca60ffbe2dae6fb5bf95131972cc927b0230086e1374f3aa33984f9a66"
	I1227 19:58:16.415229  281408 cri.go:96] found id: "59198603f9dcd704e4c9bf1e3690d726408d5fbe97ca91fbee22d027956132a4"
	I1227 19:58:16.415232  281408 cri.go:96] found id: "6b3af5ed669f8def1398487b65fab3dc84efc5016bc4e43413569ae9cf491fae"
	I1227 19:58:16.415235  281408 cri.go:96] found id: "78bf49f8848895542e9d07cd088de90af51710dcb99e3756d7a1ae5577d88b11"
	I1227 19:58:16.415240  281408 cri.go:96] found id: "affa3c0ed51244e760e065712febf0f9b147fb070147eea321d9eccfb748d170"
	I1227 19:58:16.415244  281408 cri.go:96] found id: "2d16e4494d6c0f9e5b1eff95ad99704e381224b2dd00e0b326a3c2f5fdbe920c"
	I1227 19:58:16.415247  281408 cri.go:96] found id: "acf0f77c3c7122dd9f3b2603143da9d067c290830c8d6e96ae769b64065a6f69"
	I1227 19:58:16.415250  281408 cri.go:96] found id: "527b6bec92865051022b476e87f4a56edd36b846695cf95de5df71efbc3328fd"
	I1227 19:58:16.415253  281408 cri.go:96] found id: "f6bd3ae635b96e69ae3cbe10aa5433f6dad9b05cc38a83cac219a431275bfa26"
	I1227 19:58:16.415258  281408 cri.go:96] found id: "665372886f2f5a56019a7acc4aaba64773a2800425add6614a10ed8d31727212"
	I1227 19:58:16.415261  281408 cri.go:96] found id: "46d7e3e9cbbf8984d7ddb16c6a136272c405ce68b4de5cfb0b73b424a67b97ba"
	I1227 19:58:16.415264  281408 cri.go:96] found id: "2339b22f5aa5ad86cbc68a8ee3d73f387563e2d32cb08c5b0ebd3fe231f755bf"
	I1227 19:58:16.415267  281408 cri.go:96] found id: ""
	I1227 19:58:16.415319  281408 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:58:16.432666  281408 out.go:203] 
	W1227 19:58:16.435610  281408 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:58:16.435636  281408 out.go:285] * 
	* 
	W1227 19:58:16.438619  281408 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:58:16.441747  281408 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-686526 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.31s)
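Note that the wait on the daemonset pod itself passed within about six seconds; only the follow-up disable call failed, with the same runc paused-check error as above. The readiness condition the helper waits for can be checked by hand with kubectl (a sketch; the label selector and namespace come from the log above, and the kubeconfig context name is assumed to match the profile):

	kubectl --context addons-686526 -n kube-system get pods -l name=nvidia-device-plugin-ds
	# expect nvidia-device-plugin-daemonset-5d8kz in the Running state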

                                                
                                    
TestAddons/parallel/Yakd (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-865bfb49b9-xqkrr" [5daf1e3f-a792-44b7-9087-0e0e6a54d6c7] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003249964s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-686526 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-686526 addons disable yakd --alsologtostderr -v=1: exit status 11 (264.142768ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 19:58:09.919254  281287 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:58:09.920988  281287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:09.921038  281287 out.go:374] Setting ErrFile to fd 2...
	I1227 19:58:09.921060  281287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:58:09.921359  281287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 19:58:09.921791  281287 mustload.go:66] Loading cluster: addons-686526
	I1227 19:58:09.922261  281287 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:09.922312  281287 addons.go:622] checking whether the cluster is paused
	I1227 19:58:09.922447  281287 config.go:182] Loaded profile config "addons-686526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 19:58:09.922484  281287 host.go:66] Checking if "addons-686526" exists ...
	I1227 19:58:09.923002  281287 cli_runner.go:164] Run: docker container inspect addons-686526 --format={{.State.Status}}
	I1227 19:58:09.940813  281287 ssh_runner.go:195] Run: systemctl --version
	I1227 19:58:09.940865  281287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686526
	I1227 19:58:09.962602  281287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/addons-686526/id_rsa Username:docker}
	I1227 19:58:10.061366  281287 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 19:58:10.061515  281287 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 19:58:10.099580  281287 cri.go:96] found id: "f7d0de1b69961a288553903b4992c35c22e5e36f33340aca5549666ccae62780"
	I1227 19:58:10.099606  281287 cri.go:96] found id: "aeb61d6ca819d545ff012b4bb79b4b12a2d3e4fd7c017b8aff9b3f73e7e4fd72"
	I1227 19:58:10.099611  281287 cri.go:96] found id: "76e4419cf17d2bcb37cb37c985de6da6879900fefbf8d8753bdc6f8b601a7b70"
	I1227 19:58:10.099615  281287 cri.go:96] found id: "8628a6c29088613cf486c32e6965fba1763bf29fdb83553c63ccc780c26e53e1"
	I1227 19:58:10.099619  281287 cri.go:96] found id: "8b5869c0f3a2d08ab29fe49e15a8657d278dcd90725095d352af4569975b4092"
	I1227 19:58:10.099622  281287 cri.go:96] found id: "4bfa93abefaefe59e582f297d87729a6eae0fb1a9a0a16200706276a345bd9f5"
	I1227 19:58:10.099626  281287 cri.go:96] found id: "db248b3a0ad2c188ffc0bd6285c801a114ef2a163b57e80967eed7058d10b079"
	I1227 19:58:10.099629  281287 cri.go:96] found id: "14cd7f4eeeb99b20541874eb2f68a4348b74b68c609a377d4e92df28a82336e2"
	I1227 19:58:10.099633  281287 cri.go:96] found id: "4060cf97180d981d325d0d9e6eb59ef66cf733a447793c3c3c19354ffe8cc564"
	I1227 19:58:10.099639  281287 cri.go:96] found id: "63f13e1a463440ef808022bc00191127d600efb20f8a410beeafe0ff3eba5e18"
	I1227 19:58:10.099643  281287 cri.go:96] found id: "76dabe108a8cdf25e513799e72a1701938261cabbaf7677deb7cf44b74e6693e"
	I1227 19:58:10.099651  281287 cri.go:96] found id: "5069e3ca60ffbe2dae6fb5bf95131972cc927b0230086e1374f3aa33984f9a66"
	I1227 19:58:10.099658  281287 cri.go:96] found id: "59198603f9dcd704e4c9bf1e3690d726408d5fbe97ca91fbee22d027956132a4"
	I1227 19:58:10.099662  281287 cri.go:96] found id: "6b3af5ed669f8def1398487b65fab3dc84efc5016bc4e43413569ae9cf491fae"
	I1227 19:58:10.099665  281287 cri.go:96] found id: "78bf49f8848895542e9d07cd088de90af51710dcb99e3756d7a1ae5577d88b11"
	I1227 19:58:10.099670  281287 cri.go:96] found id: "affa3c0ed51244e760e065712febf0f9b147fb070147eea321d9eccfb748d170"
	I1227 19:58:10.099673  281287 cri.go:96] found id: "2d16e4494d6c0f9e5b1eff95ad99704e381224b2dd00e0b326a3c2f5fdbe920c"
	I1227 19:58:10.099678  281287 cri.go:96] found id: "acf0f77c3c7122dd9f3b2603143da9d067c290830c8d6e96ae769b64065a6f69"
	I1227 19:58:10.099682  281287 cri.go:96] found id: "527b6bec92865051022b476e87f4a56edd36b846695cf95de5df71efbc3328fd"
	I1227 19:58:10.099697  281287 cri.go:96] found id: "f6bd3ae635b96e69ae3cbe10aa5433f6dad9b05cc38a83cac219a431275bfa26"
	I1227 19:58:10.099702  281287 cri.go:96] found id: "665372886f2f5a56019a7acc4aaba64773a2800425add6614a10ed8d31727212"
	I1227 19:58:10.099705  281287 cri.go:96] found id: "46d7e3e9cbbf8984d7ddb16c6a136272c405ce68b4de5cfb0b73b424a67b97ba"
	I1227 19:58:10.099708  281287 cri.go:96] found id: "2339b22f5aa5ad86cbc68a8ee3d73f387563e2d32cb08c5b0ebd3fe231f755bf"
	I1227 19:58:10.099712  281287 cri.go:96] found id: ""
	I1227 19:58:10.099776  281287 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 19:58:10.115737  281287 out.go:203] 
	W1227 19:58:10.118695  281287 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:58:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 19:58:10.118728  281287 out.go:285] * 
	* 
	W1227 19:58:10.122043  281287 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 19:58:10.125147  281287 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-686526 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.27s)

                                                
                                    
TestForceSystemdFlag (508.5s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-604544 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-604544 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 109 (8m22.841266185s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-604544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-604544" primary control-plane node in "force-systemd-flag-604544" cluster
	* Pulling base image v0.0.48-1766570851-22316 ...
	* Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:48:58.022662  474910 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:48:58.022826  474910 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:48:58.022837  474910 out.go:374] Setting ErrFile to fd 2...
	I1227 20:48:58.022844  474910 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:48:58.023323  474910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:48:58.027649  474910 out.go:368] Setting JSON to false
	I1227 20:48:58.028816  474910 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9090,"bootTime":1766859448,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:48:58.028911  474910 start.go:143] virtualization:  
	I1227 20:48:58.032577  474910 out.go:179] * [force-systemd-flag-604544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:48:58.037174  474910 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:48:58.037290  474910 notify.go:221] Checking for updates...
	I1227 20:48:58.043960  474910 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:48:58.047178  474910 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:48:58.050413  474910 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:48:58.053650  474910 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:48:58.056754  474910 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:48:58.060387  474910 config.go:182] Loaded profile config "force-systemd-env-859716": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:48:58.060506  474910 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:48:58.090572  474910 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:48:58.090696  474910 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:48:58.150506  474910 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:48:58.141930476 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:48:58.150610  474910 docker.go:319] overlay module found
	I1227 20:48:58.153805  474910 out.go:179] * Using the docker driver based on user configuration
	I1227 20:48:58.156787  474910 start.go:309] selected driver: docker
	I1227 20:48:58.156805  474910 start.go:928] validating driver "docker" against <nil>
	I1227 20:48:58.156819  474910 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:48:58.157586  474910 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:48:58.209159  474910 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:48:58.200108623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:48:58.209297  474910 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 20:48:58.209537  474910 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 20:48:58.212623  474910 out.go:179] * Using Docker driver with root privileges
	I1227 20:48:58.215607  474910 cni.go:84] Creating CNI manager for ""
	I1227 20:48:58.215667  474910 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:48:58.215685  474910 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 20:48:58.215753  474910 start.go:353] cluster config:
	{Name:force-systemd-flag-604544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-604544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:48:58.218830  474910 out.go:179] * Starting "force-systemd-flag-604544" primary control-plane node in "force-systemd-flag-604544" cluster
	I1227 20:48:58.221703  474910 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:48:58.224591  474910 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:48:58.227405  474910 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:48:58.227455  474910 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:48:58.227468  474910 cache.go:65] Caching tarball of preloaded images
	I1227 20:48:58.227500  474910 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:48:58.227550  474910 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:48:58.227561  474910 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:48:58.227666  474910 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/config.json ...
	I1227 20:48:58.227682  474910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/config.json: {Name:mk9ddeff611679779470328b0716153b904d87e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:48:58.246023  474910 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:48:58.246051  474910 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:48:58.246065  474910 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:48:58.246093  474910 start.go:360] acquireMachinesLock for force-systemd-flag-604544: {Name:mk858d2836eca811f8888fdbe3932081e00f5ad7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:48:58.246203  474910 start.go:364] duration metric: took 89.917µs to acquireMachinesLock for "force-systemd-flag-604544"
	I1227 20:48:58.246242  474910 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-604544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-604544 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:48:58.246307  474910 start.go:125] createHost starting for "" (driver="docker")
	I1227 20:48:58.249698  474910 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 20:48:58.249922  474910 start.go:159] libmachine.API.Create for "force-systemd-flag-604544" (driver="docker")
	I1227 20:48:58.249955  474910 client.go:173] LocalClient.Create starting
	I1227 20:48:58.250041  474910 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem
	I1227 20:48:58.250080  474910 main.go:144] libmachine: Decoding PEM data...
	I1227 20:48:58.250100  474910 main.go:144] libmachine: Parsing certificate...
	I1227 20:48:58.250156  474910 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem
	I1227 20:48:58.250183  474910 main.go:144] libmachine: Decoding PEM data...
	I1227 20:48:58.250197  474910 main.go:144] libmachine: Parsing certificate...
	I1227 20:48:58.250606  474910 cli_runner.go:164] Run: docker network inspect force-systemd-flag-604544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 20:48:58.265737  474910 cli_runner.go:211] docker network inspect force-systemd-flag-604544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 20:48:58.265813  474910 network_create.go:284] running [docker network inspect force-systemd-flag-604544] to gather additional debugging logs...
	I1227 20:48:58.265833  474910 cli_runner.go:164] Run: docker network inspect force-systemd-flag-604544
	W1227 20:48:58.281272  474910 cli_runner.go:211] docker network inspect force-systemd-flag-604544 returned with exit code 1
	I1227 20:48:58.281303  474910 network_create.go:287] error running [docker network inspect force-systemd-flag-604544]: docker network inspect force-systemd-flag-604544: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-604544 not found
	I1227 20:48:58.281317  474910 network_create.go:289] output of [docker network inspect force-systemd-flag-604544]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-604544 not found
	
	** /stderr **
	I1227 20:48:58.281421  474910 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:48:58.297571  474910 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9521cb9225c5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:1d:ef:38:b7:a6} reservation:<nil>}
	I1227 20:48:58.297942  474910 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-68d11cc2ab47 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:8d:ad:37:cb:fe} reservation:<nil>}
	I1227 20:48:58.298209  474910 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d3b7cfff4895 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:4a:e3:08:10:2f} reservation:<nil>}
	I1227 20:48:58.298488  474910 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-58ce77e21f34 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b2:3c:b3:af:25:63} reservation:<nil>}
	I1227 20:48:58.298906  474910 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400195df90}
	I1227 20:48:58.298935  474910 network_create.go:124] attempt to create docker network force-systemd-flag-604544 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1227 20:48:58.298993  474910 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-604544 force-systemd-flag-604544
	I1227 20:48:58.368476  474910 network_create.go:108] docker network force-systemd-flag-604544 192.168.85.0/24 created
	I1227 20:48:58.368511  474910 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-604544" container
	I1227 20:48:58.368597  474910 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 20:48:58.383925  474910 cli_runner.go:164] Run: docker volume create force-systemd-flag-604544 --label name.minikube.sigs.k8s.io=force-systemd-flag-604544 --label created_by.minikube.sigs.k8s.io=true
	I1227 20:48:58.401068  474910 oci.go:103] Successfully created a docker volume force-systemd-flag-604544
	I1227 20:48:58.401171  474910 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-604544-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-604544 --entrypoint /usr/bin/test -v force-systemd-flag-604544:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 20:48:58.940729  474910 oci.go:107] Successfully prepared a docker volume force-systemd-flag-604544
	I1227 20:48:58.940793  474910 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:48:58.940803  474910 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 20:48:58.940889  474910 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-604544:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 20:49:03.121551  474910 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-604544:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.180622352s)
	I1227 20:49:03.121585  474910 kic.go:203] duration metric: took 4.180778663s to extract preloaded images to volume ...
	W1227 20:49:03.121727  474910 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 20:49:03.121837  474910 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 20:49:03.185600  474910 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-604544 --name force-systemd-flag-604544 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-604544 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-604544 --network force-systemd-flag-604544 --ip 192.168.85.2 --volume force-systemd-flag-604544:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 20:49:03.516910  474910 cli_runner.go:164] Run: docker container inspect force-systemd-flag-604544 --format={{.State.Running}}
	I1227 20:49:03.540710  474910 cli_runner.go:164] Run: docker container inspect force-systemd-flag-604544 --format={{.State.Status}}
	I1227 20:49:03.560817  474910 cli_runner.go:164] Run: docker exec force-systemd-flag-604544 stat /var/lib/dpkg/alternatives/iptables
	I1227 20:49:03.612063  474910 oci.go:144] the created container "force-systemd-flag-604544" has a running status.
	I1227 20:49:03.612090  474910 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/force-systemd-flag-604544/id_rsa...
	I1227 20:49:03.827829  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/force-systemd-flag-604544/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 20:49:03.827933  474910 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22332-272475/.minikube/machines/force-systemd-flag-604544/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 20:49:03.851182  474910 cli_runner.go:164] Run: docker container inspect force-systemd-flag-604544 --format={{.State.Status}}
	I1227 20:49:03.875121  474910 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 20:49:03.875140  474910 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-604544 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 20:49:03.954889  474910 cli_runner.go:164] Run: docker container inspect force-systemd-flag-604544 --format={{.State.Status}}
	I1227 20:49:03.975939  474910 machine.go:94] provisionDockerMachine start ...
	I1227 20:49:03.976121  474910 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-604544
	I1227 20:49:04.008338  474910 main.go:144] libmachine: Using SSH client type: native
	I1227 20:49:04.008719  474910 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1227 20:49:04.008809  474910 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:49:04.009686  474910 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45156->127.0.0.1:33398: read: connection reset by peer
	I1227 20:49:07.148948  474910 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-604544
	
	I1227 20:49:07.148974  474910 ubuntu.go:182] provisioning hostname "force-systemd-flag-604544"
	I1227 20:49:07.149045  474910 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-604544
	I1227 20:49:07.166303  474910 main.go:144] libmachine: Using SSH client type: native
	I1227 20:49:07.166622  474910 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1227 20:49:07.166640  474910 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-604544 && echo "force-systemd-flag-604544" | sudo tee /etc/hostname
	I1227 20:49:07.311910  474910 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-604544
	
	I1227 20:49:07.312019  474910 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-604544
	I1227 20:49:07.329329  474910 main.go:144] libmachine: Using SSH client type: native
	I1227 20:49:07.329857  474910 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1227 20:49:07.329886  474910 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-604544' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-604544/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-604544' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:49:07.465721  474910 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:49:07.465750  474910 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:49:07.465771  474910 ubuntu.go:190] setting up certificates
	I1227 20:49:07.465821  474910 provision.go:84] configureAuth start
	I1227 20:49:07.465897  474910 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-604544
	I1227 20:49:07.484861  474910 provision.go:143] copyHostCerts
	I1227 20:49:07.484917  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:49:07.484955  474910 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:49:07.484967  474910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:49:07.485044  474910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:49:07.485156  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:49:07.485180  474910 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:49:07.485191  474910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:49:07.485224  474910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:49:07.485278  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:49:07.485296  474910 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:49:07.485305  474910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:49:07.485330  474910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:49:07.485380  474910 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-604544 san=[127.0.0.1 192.168.85.2 force-systemd-flag-604544 localhost minikube]
	I1227 20:49:08.210825  474910 provision.go:177] copyRemoteCerts
	I1227 20:49:08.210891  474910 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:49:08.210931  474910 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-604544
	I1227 20:49:08.227949  474910 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/force-systemd-flag-604544/id_rsa Username:docker}
	I1227 20:49:08.325095  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:49:08.325156  474910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:49:08.342545  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:49:08.342619  474910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:49:08.360228  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:49:08.360290  474910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1227 20:49:08.377376  474910 provision.go:87] duration metric: took 911.524856ms to configureAuth
	I1227 20:49:08.377415  474910 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:49:08.377621  474910 config.go:182] Loaded profile config "force-systemd-flag-604544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:49:08.377740  474910 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-604544
	I1227 20:49:08.394742  474910 main.go:144] libmachine: Using SSH client type: native
	I1227 20:49:08.395076  474910 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1227 20:49:08.395097  474910 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:49:08.688359  474910 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:49:08.688382  474910 machine.go:97] duration metric: took 4.712423977s to provisionDockerMachine
	I1227 20:49:08.688408  474910 client.go:176] duration metric: took 10.438425367s to LocalClient.Create
	I1227 20:49:08.688422  474910 start.go:167] duration metric: took 10.438501246s to libmachine.API.Create "force-systemd-flag-604544"
	I1227 20:49:08.688429  474910 start.go:293] postStartSetup for "force-systemd-flag-604544" (driver="docker")
	I1227 20:49:08.688439  474910 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:49:08.688498  474910 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:49:08.688538  474910 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-604544
	I1227 20:49:08.706419  474910 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/force-systemd-flag-604544/id_rsa Username:docker}
	I1227 20:49:08.806544  474910 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:49:08.810249  474910 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:49:08.810279  474910 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:49:08.810291  474910 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:49:08.810344  474910 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:49:08.810439  474910 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:49:08.810450  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:49:08.810547  474910 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:49:08.818843  474910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:49:08.837555  474910 start.go:296] duration metric: took 149.110226ms for postStartSetup
	I1227 20:49:08.837944  474910 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-604544
	I1227 20:49:08.856382  474910 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/config.json ...
	I1227 20:49:08.856668  474910 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:49:08.856893  474910 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-604544
	I1227 20:49:08.875511  474910 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/force-systemd-flag-604544/id_rsa Username:docker}
	I1227 20:49:08.971069  474910 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:49:08.976007  474910 start.go:128] duration metric: took 10.729684535s to createHost
	I1227 20:49:08.976037  474910 start.go:83] releasing machines lock for "force-systemd-flag-604544", held for 10.729817988s
	I1227 20:49:08.976111  474910 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-604544
	I1227 20:49:08.992776  474910 ssh_runner.go:195] Run: cat /version.json
	I1227 20:49:08.992792  474910 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:49:08.992829  474910 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-604544
	I1227 20:49:08.992854  474910 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-604544
	I1227 20:49:09.015032  474910 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/force-systemd-flag-604544/id_rsa Username:docker}
	I1227 20:49:09.031160  474910 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/force-systemd-flag-604544/id_rsa Username:docker}
	I1227 20:49:09.208999  474910 ssh_runner.go:195] Run: systemctl --version
	I1227 20:49:09.216221  474910 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:49:09.263226  474910 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:49:09.268926  474910 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:49:09.269034  474910 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:49:09.298703  474910 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 20:49:09.298737  474910 start.go:496] detecting cgroup driver to use...
	I1227 20:49:09.298752  474910 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 20:49:09.298814  474910 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:49:09.315989  474910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:49:09.328518  474910 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:49:09.328581  474910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:49:09.346762  474910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:49:09.365413  474910 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:49:09.491738  474910 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:49:09.618150  474910 docker.go:234] disabling docker service ...
	I1227 20:49:09.618231  474910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:49:09.639553  474910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:49:09.653330  474910 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:49:09.778390  474910 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:49:09.889294  474910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:49:09.901618  474910 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:49:09.915792  474910 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:49:09.915908  474910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:49:09.925226  474910 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 20:49:09.925379  474910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:49:09.935164  474910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:49:09.943974  474910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:49:09.953748  474910 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:49:09.966112  474910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:49:09.979589  474910 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:49:10.000678  474910 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:49:10.012258  474910 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:49:10.022355  474910 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:49:10.031295  474910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:49:10.154813  474910 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:49:10.338602  474910 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:49:10.338676  474910 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:49:10.342897  474910 start.go:574] Will wait 60s for crictl version
	I1227 20:49:10.343002  474910 ssh_runner.go:195] Run: which crictl
	I1227 20:49:10.346693  474910 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:49:10.372684  474910 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:49:10.372778  474910 ssh_runner.go:195] Run: crio --version
	I1227 20:49:10.399215  474910 ssh_runner.go:195] Run: crio --version
	I1227 20:49:10.435295  474910 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:49:10.438131  474910 cli_runner.go:164] Run: docker network inspect force-systemd-flag-604544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:49:10.454699  474910 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 20:49:10.458640  474910 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:49:10.468676  474910 kubeadm.go:884] updating cluster {Name:force-systemd-flag-604544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-604544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:49:10.468796  474910 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:49:10.468893  474910 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:49:10.503982  474910 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:49:10.504007  474910 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:49:10.504065  474910 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:49:10.529718  474910 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:49:10.529744  474910 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:49:10.529754  474910 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1227 20:49:10.529854  474910 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-604544 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-604544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:49:10.529938  474910 ssh_runner.go:195] Run: crio config
	I1227 20:49:10.606626  474910 cni.go:84] Creating CNI manager for ""
	I1227 20:49:10.606649  474910 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:49:10.606665  474910 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:49:10.606715  474910 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-604544 NodeName:force-systemd-flag-604544 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:49:10.606876  474910 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-604544"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:49:10.606967  474910 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:49:10.616670  474910 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:49:10.616764  474910 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:49:10.625181  474910 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1227 20:49:10.639445  474910 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:49:10.657431  474910 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1227 20:49:10.671341  474910 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:49:10.677421  474910 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:49:10.687919  474910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:49:10.867908  474910 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:49:10.883539  474910 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544 for IP: 192.168.85.2
	I1227 20:49:10.883558  474910 certs.go:195] generating shared ca certs ...
	I1227 20:49:10.883574  474910 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:49:10.883712  474910 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:49:10.883752  474910 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:49:10.883759  474910 certs.go:257] generating profile certs ...
	I1227 20:49:10.883814  474910 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/client.key
	I1227 20:49:10.883836  474910 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/client.crt with IP's: []
	I1227 20:49:11.262785  474910 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/client.crt ...
	I1227 20:49:11.262816  474910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/client.crt: {Name:mkf96b17aaf97305b13466fd33755c0f970b0c21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:49:11.263039  474910 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/client.key ...
	I1227 20:49:11.263050  474910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/client.key: {Name:mk511a8dea4a2ef904f7cd41d2d54f88df016985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:49:11.263156  474910 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/apiserver.key.4a19d7b0
	I1227 20:49:11.263174  474910 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/apiserver.crt.4a19d7b0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1227 20:49:11.616838  474910 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/apiserver.crt.4a19d7b0 ...
	I1227 20:49:11.616916  474910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/apiserver.crt.4a19d7b0: {Name:mk9dcdb15c3dfc6895ce949989e6d29ee9f718ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:49:11.617131  474910 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/apiserver.key.4a19d7b0 ...
	I1227 20:49:11.617180  474910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/apiserver.key.4a19d7b0: {Name:mk6fc15e3993e8e6ce42e312175f01c3da74dbdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:49:11.617302  474910 certs.go:382] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/apiserver.crt.4a19d7b0 -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/apiserver.crt
	I1227 20:49:11.617429  474910 certs.go:386] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/apiserver.key.4a19d7b0 -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/apiserver.key
	I1227 20:49:11.617550  474910 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/proxy-client.key
	I1227 20:49:11.617597  474910 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/proxy-client.crt with IP's: []
	I1227 20:49:11.781277  474910 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/proxy-client.crt ...
	I1227 20:49:11.781307  474910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/proxy-client.crt: {Name:mkbb0871cfb4feb63b281762e3574242067b5ab4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:49:11.781769  474910 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/proxy-client.key ...
	I1227 20:49:11.781941  474910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/proxy-client.key: {Name:mk86188c6172af10a05a77831d2065faebb0c16d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:49:11.782149  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:49:11.782242  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:49:11.782786  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:49:11.782856  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:49:11.782891  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:49:11.782921  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:49:11.782967  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:49:11.783009  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:49:11.783107  474910 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:49:11.783168  474910 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:49:11.783208  474910 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:49:11.783261  474910 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:49:11.783324  474910 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:49:11.783373  474910 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:49:11.783463  474910 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:49:11.783521  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:49:11.783566  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:49:11.783601  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:49:11.786032  474910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:49:11.813744  474910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:49:11.834681  474910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:49:11.854325  474910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:49:11.873960  474910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 20:49:11.892632  474910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 20:49:11.912874  474910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:49:11.931764  474910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:49:11.956931  474910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:49:11.983349  474910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:49:12.012257  474910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:49:12.040475  474910 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:49:12.058708  474910 ssh_runner.go:195] Run: openssl version
	I1227 20:49:12.065534  474910 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:49:12.074619  474910 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:49:12.083745  474910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:49:12.087932  474910 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:49:12.088005  474910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:49:12.132637  474910 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:49:12.140117  474910 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 20:49:12.147701  474910 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:49:12.156885  474910 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:49:12.165249  474910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:49:12.169422  474910 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:49:12.169580  474910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:49:12.220696  474910 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:49:12.232919  474910 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/274336.pem /etc/ssl/certs/51391683.0
	I1227 20:49:12.242079  474910 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:49:12.252100  474910 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:49:12.260829  474910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:49:12.264949  474910 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:49:12.265009  474910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:49:12.317591  474910 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:49:12.326071  474910 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2743362.pem /etc/ssl/certs/3ec20f2e.0
	I1227 20:49:12.333929  474910 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:49:12.338078  474910 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 20:49:12.338129  474910 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-604544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-604544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:49:12.338207  474910 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:49:12.338269  474910 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:49:12.372335  474910 cri.go:96] found id: ""
	I1227 20:49:12.372455  474910 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:49:12.381373  474910 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 20:49:12.390179  474910 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:49:12.390308  474910 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:49:12.401324  474910 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:49:12.401423  474910 kubeadm.go:158] found existing configuration files:
	
	I1227 20:49:12.401514  474910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 20:49:12.410747  474910 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:49:12.410864  474910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:49:12.419935  474910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 20:49:12.432509  474910 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:49:12.432569  474910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:49:12.441503  474910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 20:49:12.450041  474910 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:49:12.450110  474910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:49:12.465948  474910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 20:49:12.477765  474910 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:49:12.477832  474910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 20:49:12.499543  474910 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:49:12.566891  474910 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 20:49:12.566955  474910 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 20:49:12.673076  474910 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 20:49:12.673154  474910 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 20:49:12.673195  474910 kubeadm.go:319] OS: Linux
	I1227 20:49:12.673244  474910 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 20:49:12.673295  474910 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 20:49:12.673346  474910 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 20:49:12.673398  474910 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 20:49:12.673460  474910 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 20:49:12.673517  474910 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 20:49:12.673566  474910 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 20:49:12.673619  474910 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 20:49:12.673668  474910 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 20:49:12.763819  474910 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 20:49:12.763934  474910 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 20:49:12.764044  474910 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 20:49:12.790734  474910 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 20:49:12.801158  474910 out.go:252]   - Generating certificates and keys ...
	I1227 20:49:12.801273  474910 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 20:49:12.801340  474910 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 20:49:13.652994  474910 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 20:49:13.951226  474910 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 20:49:14.127441  474910 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 20:49:14.231105  474910 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 20:49:14.880930  474910 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 20:49:14.881425  474910 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-604544 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 20:49:15.156237  474910 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 20:49:15.157246  474910 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-604544 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 20:49:15.560403  474910 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 20:49:16.617624  474910 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 20:49:16.887652  474910 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 20:49:16.887728  474910 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 20:49:17.043609  474910 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 20:49:17.180742  474910 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 20:49:17.344586  474910 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 20:49:17.449815  474910 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 20:49:17.644927  474910 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 20:49:17.645027  474910 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 20:49:17.653814  474910 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 20:49:17.659451  474910 out.go:252]   - Booting up control plane ...
	I1227 20:49:17.659572  474910 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 20:49:17.659679  474910 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 20:49:17.659892  474910 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 20:49:17.677844  474910 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 20:49:17.677964  474910 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 20:49:17.685414  474910 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 20:49:17.685527  474910 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 20:49:17.685568  474910 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 20:49:17.830481  474910 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 20:49:17.830606  474910 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 20:53:17.830638  474910 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000454593s
	I1227 20:53:17.830670  474910 kubeadm.go:319] 
	I1227 20:53:17.830728  474910 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 20:53:17.830767  474910 kubeadm.go:319] 	- The kubelet is not running
	I1227 20:53:17.830876  474910 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 20:53:17.830890  474910 kubeadm.go:319] 
	I1227 20:53:17.831007  474910 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 20:53:17.831044  474910 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 20:53:17.831080  474910 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 20:53:17.831090  474910 kubeadm.go:319] 
	I1227 20:53:17.840852  474910 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 20:53:17.841422  474910 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 20:53:17.841601  474910 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 20:53:17.841886  474910 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 20:53:17.841897  474910 kubeadm.go:319] 
	I1227 20:53:17.842003  474910 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1227 20:53:17.842151  474910 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-604544 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-604544 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000454593s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-604544 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-604544 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000454593s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1227 20:53:17.842524  474910 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1227 20:53:18.297196  474910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:53:18.313794  474910 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:53:18.313858  474910 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:53:18.328548  474910 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:53:18.328563  474910 kubeadm.go:158] found existing configuration files:
	
	I1227 20:53:18.328612  474910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 20:53:18.337565  474910 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:53:18.337625  474910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:53:18.345514  474910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 20:53:18.361572  474910 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:53:18.361631  474910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:53:18.370675  474910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 20:53:18.380291  474910 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:53:18.380351  474910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:53:18.388318  474910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 20:53:18.397555  474910 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:53:18.397614  474910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 20:53:18.405927  474910 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:53:18.459925  474910 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 20:53:18.462148  474910 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 20:53:18.582764  474910 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 20:53:18.582834  474910 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 20:53:18.582869  474910 kubeadm.go:319] OS: Linux
	I1227 20:53:18.582914  474910 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 20:53:18.582963  474910 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 20:53:18.583009  474910 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 20:53:18.583058  474910 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 20:53:18.583106  474910 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 20:53:18.583156  474910 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 20:53:18.583202  474910 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 20:53:18.583249  474910 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 20:53:18.583295  474910 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 20:53:18.661397  474910 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 20:53:18.661593  474910 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 20:53:18.661692  474910 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 20:53:18.673550  474910 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 20:53:18.677119  474910 out.go:252]   - Generating certificates and keys ...
	I1227 20:53:18.677208  474910 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 20:53:18.677273  474910 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 20:53:18.677349  474910 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1227 20:53:18.677409  474910 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1227 20:53:18.677493  474910 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1227 20:53:18.677547  474910 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1227 20:53:18.677610  474910 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1227 20:53:18.677670  474910 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1227 20:53:18.677777  474910 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1227 20:53:18.677991  474910 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1227 20:53:18.678513  474910 kubeadm.go:319] [certs] Using the existing "sa" key
	I1227 20:53:18.678633  474910 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 20:53:18.896097  474910 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 20:53:19.004158  474910 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 20:53:19.445174  474910 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 20:53:19.781500  474910 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 20:53:20.081747  474910 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 20:53:20.082546  474910 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 20:53:20.087142  474910 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 20:53:20.092820  474910 out.go:252]   - Booting up control plane ...
	I1227 20:53:20.092958  474910 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 20:53:20.093054  474910 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 20:53:20.093131  474910 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 20:53:20.107486  474910 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 20:53:20.107602  474910 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 20:53:20.115013  474910 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 20:53:20.115305  474910 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 20:53:20.115524  474910 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 20:53:20.245870  474910 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 20:53:20.245997  474910 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 20:57:20.246170  474910 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000888474s
	I1227 20:57:20.250878  474910 kubeadm.go:319] 
	I1227 20:57:20.251004  474910 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 20:57:20.251065  474910 kubeadm.go:319] 	- The kubelet is not running
	I1227 20:57:20.251251  474910 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 20:57:20.251285  474910 kubeadm.go:319] 
	I1227 20:57:20.251471  474910 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 20:57:20.251529  474910 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 20:57:20.251585  474910 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 20:57:20.251590  474910 kubeadm.go:319] 
	I1227 20:57:20.263808  474910 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 20:57:20.264368  474910 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 20:57:20.264529  474910 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 20:57:20.264838  474910 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 20:57:20.264873  474910 kubeadm.go:319] 
	I1227 20:57:20.264978  474910 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 20:57:20.265072  474910 kubeadm.go:403] duration metric: took 8m7.926946713s to StartCluster
	I1227 20:57:20.265133  474910 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:57:20.265226  474910 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:57:20.308233  474910 cri.go:96] found id: ""
	I1227 20:57:20.308321  474910 logs.go:282] 0 containers: []
	W1227 20:57:20.308343  474910 logs.go:284] No container was found matching "kube-apiserver"
	I1227 20:57:20.308379  474910 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:57:20.308480  474910 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:57:20.342783  474910 cri.go:96] found id: ""
	I1227 20:57:20.342855  474910 logs.go:282] 0 containers: []
	W1227 20:57:20.342877  474910 logs.go:284] No container was found matching "etcd"
	I1227 20:57:20.342897  474910 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:57:20.342984  474910 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:57:20.375582  474910 cri.go:96] found id: ""
	I1227 20:57:20.375654  474910 logs.go:282] 0 containers: []
	W1227 20:57:20.375676  474910 logs.go:284] No container was found matching "coredns"
	I1227 20:57:20.375696  474910 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:57:20.375782  474910 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:57:20.411280  474910 cri.go:96] found id: ""
	I1227 20:57:20.411359  474910 logs.go:282] 0 containers: []
	W1227 20:57:20.411382  474910 logs.go:284] No container was found matching "kube-scheduler"
	I1227 20:57:20.411403  474910 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:57:20.411514  474910 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:57:20.443201  474910 cri.go:96] found id: ""
	I1227 20:57:20.443281  474910 logs.go:282] 0 containers: []
	W1227 20:57:20.443303  474910 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:57:20.443326  474910 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:57:20.443432  474910 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:57:20.471796  474910 cri.go:96] found id: ""
	I1227 20:57:20.471871  474910 logs.go:282] 0 containers: []
	W1227 20:57:20.471907  474910 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 20:57:20.471933  474910 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:57:20.472020  474910 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:57:20.503811  474910 cri.go:96] found id: ""
	I1227 20:57:20.503890  474910 logs.go:282] 0 containers: []
	W1227 20:57:20.503914  474910 logs.go:284] No container was found matching "kindnet"
	I1227 20:57:20.503938  474910 logs.go:123] Gathering logs for kubelet ...
	I1227 20:57:20.503984  474910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:57:20.582388  474910 logs.go:123] Gathering logs for dmesg ...
	I1227 20:57:20.582471  474910 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:57:20.606401  474910 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:57:20.606473  474910 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:57:20.684420  474910 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:57:20.675228    4891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:57:20.675985    4891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:57:20.677695    4891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:57:20.678005    4891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:57:20.680079    4891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:57:20.675228    4891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:57:20.675985    4891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:57:20.677695    4891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:57:20.678005    4891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:57:20.680079    4891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:57:20.684486  474910 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:57:20.684522  474910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:57:20.722115  474910 logs.go:123] Gathering logs for container status ...
	I1227 20:57:20.722195  474910 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1227 20:57:20.769054  474910 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000888474s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 20:57:20.769161  474910 out.go:285] * 
	W1227 20:57:20.769358  474910 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000888474s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 20:57:20.769415  474910 out.go:285] * 
	W1227 20:57:20.769730  474910 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 20:57:20.776045  474910 out.go:203] 
	W1227 20:57:20.780003  474910 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000888474s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 20:57:20.780151  474910 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 20:57:20.780223  474910 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 20:57:20.783377  474910 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-604544 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 109
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-604544 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-12-27 20:57:21.232343524 +0000 UTC m=+3718.737125385
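The captured output above points at the kubelet ('systemctl status kubelet', 'journalctl -xeu kubelet') and at retrying with --extra-config=kubelet.cgroup-driver=systemd. A manual reproduction sketch only, reusing the profile name, memory, driver and runtime from the failed invocation (not something the test run performed):
	# inspect why the kubelet never answered on 127.0.0.1:10248
	out/minikube-linux-arm64 -p force-systemd-flag-604544 ssh "sudo systemctl status kubelet; sudo journalctl -xeu kubelet --no-pager | tail -n 100"
	# recreate the profile with the systemd cgroup driver the suggestion names
	out/minikube-linux-arm64 delete -p force-systemd-flag-604544
	out/minikube-linux-arm64 start -p force-systemd-flag-604544 --memory=3072 --force-systemd --driver=docker --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd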
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-flag-604544
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-604544:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c2e2ce44d580f099b141d687fb2e6ed91418dbcb5f1479605f0f24a1cc32c9fe",
	        "Created": "2025-12-27T20:49:03.200425398Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 475344,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:49:03.275508643Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/c2e2ce44d580f099b141d687fb2e6ed91418dbcb5f1479605f0f24a1cc32c9fe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c2e2ce44d580f099b141d687fb2e6ed91418dbcb5f1479605f0f24a1cc32c9fe/hostname",
	        "HostsPath": "/var/lib/docker/containers/c2e2ce44d580f099b141d687fb2e6ed91418dbcb5f1479605f0f24a1cc32c9fe/hosts",
	        "LogPath": "/var/lib/docker/containers/c2e2ce44d580f099b141d687fb2e6ed91418dbcb5f1479605f0f24a1cc32c9fe/c2e2ce44d580f099b141d687fb2e6ed91418dbcb5f1479605f0f24a1cc32c9fe-json.log",
	        "Name": "/force-systemd-flag-604544",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-604544:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-604544",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c2e2ce44d580f099b141d687fb2e6ed91418dbcb5f1479605f0f24a1cc32c9fe",
	                "LowerDir": "/var/lib/docker/overlay2/c23220f587995033a9a5b179f54de6bf3b6a35e7ff2454307d51902dbfeadff1-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c23220f587995033a9a5b179f54de6bf3b6a35e7ff2454307d51902dbfeadff1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c23220f587995033a9a5b179f54de6bf3b6a35e7ff2454307d51902dbfeadff1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c23220f587995033a9a5b179f54de6bf3b6a35e7ff2454307d51902dbfeadff1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-604544",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-604544/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-604544",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-604544",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-604544",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "36d7bccf6fa167a6504259433ed6cc06a08054e6d645f77048785de457620e3d",
	            "SandboxKey": "/var/run/docker/netns/36d7bccf6fa1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33398"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33399"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33402"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33400"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33401"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-604544": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:17:d3:4e:ab:0d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "204503c8e8e5d0937b11eeb6e90714601416ef33b42928c8e78ca73abf584865",
	                    "EndpointID": "e3984163c7e3dfdcc8a81a78d383a8deb51ba82201d1683382d99fa3093d2352",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-604544",
	                        "c2e2ce44d580"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
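The inspect dump above records the container's port mappings and network attachment. When only a single field is needed (for example the host port mapped to the node's SSH port, which the harness itself queries later in this log), a --format template against the same container avoids parsing the full JSON. A sketch using the profile name and values from this run:

	docker container inspect force-systemd-flag-604544 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'   # 33398 per the dump above
	docker container inspect force-systemd-flag-604544 \
	  --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'         # 192.168.85.2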
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-604544 -n force-systemd-flag-604544
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-604544 -n force-systemd-flag-604544: exit status 6 (371.376763ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 20:57:21.614275  507036 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-604544" does not appear in /home/jenkins/minikube-integration/22332-272475/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
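The exit status 6 comes from the kubeconfig check: the "force-systemd-flag-604544" endpoint is missing from the kubeconfig in use, so status reports the host as Running but flags the kubectl context as stale. The warning in stdout already names the remedy; a sketch of that sequence, assuming the same profile and kubeconfig path as this run:

	KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig \
	  out/minikube-linux-arm64 -p force-systemd-flag-604544 update-context
	out/minikube-linux-arm64 status -p force-systemd-flag-604544 --format '{{.Host}}'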
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-604544 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p force-systemd-flag-604544 logs -n 25: (1.000243837s)
helpers_test.go:261: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-855707 image list --format=json                                                                                                                          │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ pause   │ -p old-k8s-version-855707 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │                     │
	│ delete  │ -p old-k8s-version-855707                                                                                                                                                │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ delete  │ -p old-k8s-version-855707                                                                                                                                                │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ start   │ -p default-k8s-diff-port-058924 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:53 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-058924 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-058924 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-058924 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
	│ start   │ -p default-k8s-diff-port-058924 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:54 UTC │
	│ image   │ default-k8s-diff-port-058924 image list --format=json                                                                                                                    │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
	│ pause   │ -p default-k8s-diff-port-058924 --alsologtostderr -v=1                                                                                                                   │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-058924                                                                                                                                          │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
	│ delete  │ -p default-k8s-diff-port-058924                                                                                                                                          │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
	│ start   │ -p embed-certs-193865 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                   │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:55 UTC │
	│ addons  │ enable metrics-server -p embed-certs-193865 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │                     │
	│ stop    │ -p embed-certs-193865 --alsologtostderr -v=3                                                                                                                             │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │ 27 Dec 25 20:55 UTC │
	│ addons  │ enable dashboard -p embed-certs-193865 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │ 27 Dec 25 20:55 UTC │
	│ start   │ -p embed-certs-193865 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                   │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │ 27 Dec 25 20:56 UTC │
	│ image   │ embed-certs-193865 image list --format=json                                                                                                                              │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:56 UTC │ 27 Dec 25 20:56 UTC │
	│ pause   │ -p embed-certs-193865 --alsologtostderr -v=1                                                                                                                             │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:56 UTC │                     │
	│ delete  │ -p embed-certs-193865                                                                                                                                                    │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ delete  │ -p embed-certs-193865                                                                                                                                                    │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ delete  │ -p disable-driver-mounts-371621                                                                                                                                          │ disable-driver-mounts-371621 │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ start   │ -p no-preload-542467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                  │ no-preload-542467            │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │                     │
	│ ssh     │ force-systemd-flag-604544 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                     │ force-systemd-flag-604544    │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:57:04
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:57:04.080407  504634 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:57:04.080557  504634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:57:04.080583  504634 out.go:374] Setting ErrFile to fd 2...
	I1227 20:57:04.080601  504634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:57:04.080872  504634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:57:04.081354  504634 out.go:368] Setting JSON to false
	I1227 20:57:04.082260  504634 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9576,"bootTime":1766859448,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:57:04.082339  504634 start.go:143] virtualization:  
	I1227 20:57:04.086329  504634 out.go:179] * [no-preload-542467] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:57:04.089776  504634 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:57:04.089895  504634 notify.go:221] Checking for updates...
	I1227 20:57:04.096179  504634 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:57:04.099420  504634 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:57:04.102409  504634 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:57:04.105474  504634 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:57:04.108560  504634 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:57:04.112049  504634 config.go:182] Loaded profile config "force-systemd-flag-604544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:57:04.112228  504634 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:57:04.141583  504634 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:57:04.141742  504634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:57:04.207117  504634 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:57:04.198395162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:57:04.207229  504634 docker.go:319] overlay module found
	I1227 20:57:04.210521  504634 out.go:179] * Using the docker driver based on user configuration
	I1227 20:57:04.213623  504634 start.go:309] selected driver: docker
	I1227 20:57:04.213650  504634 start.go:928] validating driver "docker" against <nil>
	I1227 20:57:04.213665  504634 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:57:04.214413  504634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:57:04.269900  504634 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:57:04.26058478 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
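	minikube derives the host-side facts it needs (CgroupDriver:cgroupfs, memory and CPU totals, security options) from this docker info struct. The same fields can be read individually with a Go-template query instead of the full JSON dump; a minimal sketch, assuming these template field names match the daemon's Info struct:

	docker system info --format '{{.CgroupDriver}}'        # cgroupfs on this host
	docker system info --format '{{.MemTotal}} {{.NCPU}}'  # 8214831104 bytes, 2 CPUs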
	I1227 20:57:04.270047  504634 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 20:57:04.270274  504634 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:57:04.273209  504634 out.go:179] * Using Docker driver with root privileges
	I1227 20:57:04.276014  504634 cni.go:84] Creating CNI manager for ""
	I1227 20:57:04.276084  504634 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:57:04.276099  504634 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 20:57:04.276169  504634 start.go:353] cluster config:
	{Name:no-preload-542467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-542467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
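	The cluster config printed above is what gets persisted as the profile's config.json (the save path appears a few lines below). Once on disk, individual settings can be read back; a sketch, assuming jq is available on the host and that the JSON keys mirror the struct field names shown here:

	jq '{runtime: .KubernetesConfig.ContainerRuntime, version: .KubernetesConfig.KubernetesVersion, memory: .Memory}' \
	  /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/config.json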
	I1227 20:57:04.279319  504634 out.go:179] * Starting "no-preload-542467" primary control-plane node in "no-preload-542467" cluster
	I1227 20:57:04.282174  504634 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:57:04.285118  504634 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:57:04.287974  504634 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:57:04.288041  504634 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:57:04.288137  504634 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/config.json ...
	I1227 20:57:04.288190  504634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/config.json: {Name:mkce5b24836feac439873d29301e638629b60f32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:57:04.288444  504634 cache.go:107] acquiring lock: {Name:mk49d2801297fbd0ac942585f01f4934960e7be0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:57:04.288515  504634 cache.go:115] /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1227 20:57:04.288529  504634 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 91.633µs
	I1227 20:57:04.288537  504634 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1227 20:57:04.288554  504634 cache.go:107] acquiring lock: {Name:mk57ec4ea0677ff3978b78d099897da421a88bcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:57:04.288595  504634 cache.go:115] /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1227 20:57:04.288604  504634 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0" took 51.773µs
	I1227 20:57:04.288611  504634 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1227 20:57:04.288621  504634 cache.go:107] acquiring lock: {Name:mk0180a2bef806cd826880bd91102414c57d194c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:57:04.288652  504634 cache.go:115] /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1227 20:57:04.288662  504634 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0" took 41.927µs
	I1227 20:57:04.288669  504634 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1227 20:57:04.288678  504634 cache.go:107] acquiring lock: {Name:mk10b17fce2ed7fe50c6773b11f0279ccc111b2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:57:04.288709  504634 cache.go:115] /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1227 20:57:04.288719  504634 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0" took 42.009µs
	I1227 20:57:04.288738  504634 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1227 20:57:04.288747  504634 cache.go:107] acquiring lock: {Name:mked2eed6541133cb693fb44794f23eb22f88455 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:57:04.288788  504634 cache.go:115] /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1227 20:57:04.288797  504634 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0" took 50.468µs
	I1227 20:57:04.288803  504634 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1227 20:57:04.288812  504634 cache.go:107] acquiring lock: {Name:mkbc004a239c7edae53d9c77df47817b087fc512 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:57:04.288868  504634 cache.go:115] /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1227 20:57:04.288879  504634 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 67.584µs
	I1227 20:57:04.288885  504634 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1227 20:57:04.288895  504634 cache.go:107] acquiring lock: {Name:mk228edfb5bd736e0cf31bde3aae8af60895f7b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:57:04.288930  504634 cache.go:115] /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1227 20:57:04.288940  504634 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 46.317µs
	I1227 20:57:04.288946  504634 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1227 20:57:04.288954  504634 cache.go:107] acquiring lock: {Name:mk752104301746b978c1bee66759df528e70e868 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:57:04.288984  504634 cache.go:115] /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1227 20:57:04.289011  504634 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 40.007µs
	I1227 20:57:04.289021  504634 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1227 20:57:04.289029  504634 cache.go:87] Successfully saved all images to host disk.
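	Every image needed for --preload=false is already in the host-side cache, which is why each lookup above completes in microseconds. The cache is a directory of image tarballs keyed by architecture and registry path; listing it with the MINIKUBE_HOME from this run should show the same entries named in the cache lines above:

	ls /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/
	# expected per the paths above: etcd_3.6.6-0, kube-apiserver_v1.35.0, kube-controller-manager_v1.35.0,
	# kube-proxy_v1.35.0, kube-scheduler_v1.35.0, pause_3.10.1, and a coredns/ subdirectory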
	I1227 20:57:04.308778  504634 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:57:04.308801  504634 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:57:04.308817  504634 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:57:04.308846  504634 start.go:360] acquireMachinesLock for no-preload-542467: {Name:mk114c6a1688b2871aa3ca20c6447ce0cbe2c754 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:57:04.308957  504634 start.go:364] duration metric: took 91.009µs to acquireMachinesLock for "no-preload-542467"
	I1227 20:57:04.308988  504634 start.go:93] Provisioning new machine with config: &{Name:no-preload-542467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-542467 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:57:04.309068  504634 start.go:125] createHost starting for "" (driver="docker")
	I1227 20:57:04.314461  504634 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 20:57:04.314699  504634 start.go:159] libmachine.API.Create for "no-preload-542467" (driver="docker")
	I1227 20:57:04.314740  504634 client.go:173] LocalClient.Create starting
	I1227 20:57:04.314802  504634 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem
	I1227 20:57:04.314840  504634 main.go:144] libmachine: Decoding PEM data...
	I1227 20:57:04.314859  504634 main.go:144] libmachine: Parsing certificate...
	I1227 20:57:04.314912  504634 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem
	I1227 20:57:04.314937  504634 main.go:144] libmachine: Decoding PEM data...
	I1227 20:57:04.314954  504634 main.go:144] libmachine: Parsing certificate...
	I1227 20:57:04.315322  504634 cli_runner.go:164] Run: docker network inspect no-preload-542467 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 20:57:04.331175  504634 cli_runner.go:211] docker network inspect no-preload-542467 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 20:57:04.331257  504634 network_create.go:284] running [docker network inspect no-preload-542467] to gather additional debugging logs...
	I1227 20:57:04.331281  504634 cli_runner.go:164] Run: docker network inspect no-preload-542467
	W1227 20:57:04.345779  504634 cli_runner.go:211] docker network inspect no-preload-542467 returned with exit code 1
	I1227 20:57:04.345814  504634 network_create.go:287] error running [docker network inspect no-preload-542467]: docker network inspect no-preload-542467: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-542467 not found
	I1227 20:57:04.345828  504634 network_create.go:289] output of [docker network inspect no-preload-542467]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-542467 not found
	
	** /stderr **
	I1227 20:57:04.345921  504634 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:57:04.362032  504634 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9521cb9225c5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:1d:ef:38:b7:a6} reservation:<nil>}
	I1227 20:57:04.362440  504634 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-68d11cc2ab47 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:8d:ad:37:cb:fe} reservation:<nil>}
	I1227 20:57:04.362707  504634 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d3b7cfff4895 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:4a:e3:08:10:2f} reservation:<nil>}
	I1227 20:57:04.363167  504634 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019bdb10}
	I1227 20:57:04.363191  504634 network_create.go:124] attempt to create docker network no-preload-542467 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 20:57:04.363255  504634 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-542467 no-preload-542467
	I1227 20:57:04.420652  504634 network_create.go:108] docker network no-preload-542467 192.168.76.0/24 created
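	After skipping the three subnets already in use, minikube creates a dedicated bridge network on 192.168.76.0/24 and pins the node at .2. The result can be checked with the same inspect template the tooling itself uses; a sketch:

	docker network inspect no-preload-542467 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'   # 192.168.76.0/24 gw 192.168.76.1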
	I1227 20:57:04.420685  504634 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-542467" container
	I1227 20:57:04.420760  504634 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 20:57:04.436022  504634 cli_runner.go:164] Run: docker volume create no-preload-542467 --label name.minikube.sigs.k8s.io=no-preload-542467 --label created_by.minikube.sigs.k8s.io=true
	I1227 20:57:04.453779  504634 oci.go:103] Successfully created a docker volume no-preload-542467
	I1227 20:57:04.453879  504634 cli_runner.go:164] Run: docker run --rm --name no-preload-542467-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-542467 --entrypoint /usr/bin/test -v no-preload-542467:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 20:57:04.978185  504634 oci.go:107] Successfully prepared a docker volume no-preload-542467
	I1227 20:57:04.978253  504634 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	W1227 20:57:04.978370  504634 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 20:57:04.978483  504634 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 20:57:05.033725  504634 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-542467 --name no-preload-542467 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-542467 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-542467 --network no-preload-542467 --ip 192.168.76.2 --volume no-preload-542467:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 20:57:05.377857  504634 cli_runner.go:164] Run: docker container inspect no-preload-542467 --format={{.State.Running}}
	I1227 20:57:05.398602  504634 cli_runner.go:164] Run: docker container inspect no-preload-542467 --format={{.State.Status}}
	I1227 20:57:05.418699  504634 cli_runner.go:164] Run: docker exec no-preload-542467 stat /var/lib/dpkg/alternatives/iptables
	I1227 20:57:05.467932  504634 oci.go:144] the created container "no-preload-542467" has a running status.
	I1227 20:57:05.467958  504634 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/no-preload-542467/id_rsa...
	I1227 20:57:05.562728  504634 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22332-272475/.minikube/machines/no-preload-542467/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 20:57:05.595070  504634 cli_runner.go:164] Run: docker container inspect no-preload-542467 --format={{.State.Status}}
	I1227 20:57:05.620532  504634 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 20:57:05.620552  504634 kic_runner.go:114] Args: [docker exec --privileged no-preload-542467 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 20:57:05.668061  504634 cli_runner.go:164] Run: docker container inspect no-preload-542467 --format={{.State.Status}}
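	With the generated key installed into /home/docker/.ssh/authorized_keys, the node is reachable over the host port published for 22/tcp (33438 on this run, per the SSH client lines below). A manual debugging connection would look roughly like this, assuming the same key path and port:

	ssh -o StrictHostKeyChecking=no -p 33438 \
	  -i /home/jenkins/minikube-integration/22332-272475/.minikube/machines/no-preload-542467/id_rsa \
	  docker@127.0.0.1 hostname   # should print no-preload-542467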
	I1227 20:57:05.689271  504634 machine.go:94] provisionDockerMachine start ...
	I1227 20:57:05.689351  504634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-542467
	I1227 20:57:05.709370  504634 main.go:144] libmachine: Using SSH client type: native
	I1227 20:57:05.709730  504634 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1227 20:57:05.709740  504634 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:57:05.713277  504634 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59260->127.0.0.1:33438: read: connection reset by peer
	I1227 20:57:08.857026  504634 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-542467
	
	I1227 20:57:08.857051  504634 ubuntu.go:182] provisioning hostname "no-preload-542467"
	I1227 20:57:08.857126  504634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-542467
	I1227 20:57:08.874949  504634 main.go:144] libmachine: Using SSH client type: native
	I1227 20:57:08.875257  504634 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1227 20:57:08.875273  504634 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-542467 && echo "no-preload-542467" | sudo tee /etc/hostname
	I1227 20:57:09.028086  504634 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-542467
	
	I1227 20:57:09.028167  504634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-542467
	I1227 20:57:09.045287  504634 main.go:144] libmachine: Using SSH client type: native
	I1227 20:57:09.045628  504634 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1227 20:57:09.045652  504634 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-542467' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-542467/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-542467' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:57:09.185748  504634 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:57:09.185775  504634 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:57:09.185794  504634 ubuntu.go:190] setting up certificates
	I1227 20:57:09.185803  504634 provision.go:84] configureAuth start
	I1227 20:57:09.185862  504634 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-542467
	I1227 20:57:09.202878  504634 provision.go:143] copyHostCerts
	I1227 20:57:09.202941  504634 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:57:09.202955  504634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:57:09.203038  504634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:57:09.203136  504634 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:57:09.203159  504634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:57:09.203193  504634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:57:09.203252  504634 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:57:09.203262  504634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:57:09.203288  504634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:57:09.203343  504634 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.no-preload-542467 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-542467]
	I1227 20:57:09.373998  504634 provision.go:177] copyRemoteCerts
	I1227 20:57:09.374071  504634 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:57:09.374123  504634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-542467
	I1227 20:57:09.394318  504634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/no-preload-542467/id_rsa Username:docker}
	I1227 20:57:09.493261  504634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:57:09.510812  504634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 20:57:09.527535  504634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 20:57:09.544703  504634 provision.go:87] duration metric: took 358.877808ms to configureAuth
	I1227 20:57:09.544786  504634 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:57:09.545000  504634 config.go:182] Loaded profile config "no-preload-542467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:57:09.545109  504634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-542467
	I1227 20:57:09.562740  504634 main.go:144] libmachine: Using SSH client type: native
	I1227 20:57:09.563057  504634 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1227 20:57:09.563077  504634 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:57:09.865032  504634 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:57:09.865099  504634 machine.go:97] duration metric: took 4.175809343s to provisionDockerMachine
	I1227 20:57:09.865125  504634 client.go:176] duration metric: took 5.550374453s to LocalClient.Create
	I1227 20:57:09.865183  504634 start.go:167] duration metric: took 5.550482609s to libmachine.API.Create "no-preload-542467"
	I1227 20:57:09.865198  504634 start.go:293] postStartSetup for "no-preload-542467" (driver="docker")
	I1227 20:57:09.865208  504634 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:57:09.865271  504634 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:57:09.865327  504634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-542467
	I1227 20:57:09.882413  504634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/no-preload-542467/id_rsa Username:docker}
	I1227 20:57:09.981193  504634 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:57:09.984351  504634 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:57:09.984383  504634 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:57:09.984395  504634 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:57:09.984445  504634 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:57:09.984523  504634 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:57:09.984624  504634 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:57:09.991704  504634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:57:10.008891  504634 start.go:296] duration metric: took 143.675611ms for postStartSetup
	I1227 20:57:10.009343  504634 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-542467
	I1227 20:57:10.030080  504634 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/config.json ...
	I1227 20:57:10.030379  504634 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:57:10.030447  504634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-542467
	I1227 20:57:10.048409  504634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/no-preload-542467/id_rsa Username:docker}
	I1227 20:57:10.146438  504634 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:57:10.150960  504634 start.go:128] duration metric: took 5.841876249s to createHost
	I1227 20:57:10.150989  504634 start.go:83] releasing machines lock for "no-preload-542467", held for 5.842017143s
	I1227 20:57:10.151059  504634 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-542467
	I1227 20:57:10.167642  504634 ssh_runner.go:195] Run: cat /version.json
	I1227 20:57:10.167698  504634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-542467
	I1227 20:57:10.167999  504634 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:57:10.168060  504634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-542467
	I1227 20:57:10.196097  504634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/no-preload-542467/id_rsa Username:docker}
	I1227 20:57:10.198105  504634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/no-preload-542467/id_rsa Username:docker}
	I1227 20:57:10.389643  504634 ssh_runner.go:195] Run: systemctl --version
	I1227 20:57:10.395952  504634 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:57:10.431945  504634 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:57:10.436310  504634 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:57:10.436380  504634 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:57:10.466327  504634 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 20:57:10.466352  504634 start.go:496] detecting cgroup driver to use...
	I1227 20:57:10.466383  504634 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:57:10.466431  504634 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:57:10.485875  504634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:57:10.499796  504634 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:57:10.499857  504634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:57:10.516917  504634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:57:10.540801  504634 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:57:10.661203  504634 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:57:10.786749  504634 docker.go:234] disabling docker service ...
	I1227 20:57:10.786815  504634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:57:10.808781  504634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:57:10.822430  504634 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:57:10.938753  504634 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:57:11.072457  504634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:57:11.086433  504634 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:57:11.100040  504634 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:57:11.100122  504634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:57:11.109255  504634 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:57:11.109337  504634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:57:11.118691  504634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:57:11.128126  504634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:57:11.137509  504634 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:57:11.146367  504634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:57:11.155297  504634 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:57:11.169049  504634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:57:11.178274  504634 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:57:11.185800  504634 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:57:11.193104  504634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:57:11.334718  504634 ssh_runner.go:195] Run: sudo systemctl restart crio
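Note: the printf/tee and sed commands logged between 20:57:11.086 and 20:57:11.169 rewrite the runtime configuration in place before crio is restarted. As a rough sketch, and assuming the stock files contained the keys being matched, /etc/crictl.yaml would end up as

    runtime-endpoint: unix:///var/run/crio/crio.sock

and /etc/crio/crio.conf.d/02-crio.conf would end up containing roughly

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

This fragment is reconstructed from the commands above for illustration; it is not a capture of the actual files on the node.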
	I1227 20:57:11.507979  504634 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:57:11.508111  504634 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:57:11.512251  504634 start.go:574] Will wait 60s for crictl version
	I1227 20:57:11.512350  504634 ssh_runner.go:195] Run: which crictl
	I1227 20:57:11.516159  504634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:57:11.539923  504634 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:57:11.540049  504634 ssh_runner.go:195] Run: crio --version
	I1227 20:57:11.570060  504634 ssh_runner.go:195] Run: crio --version
	I1227 20:57:11.603544  504634 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:57:11.606314  504634 cli_runner.go:164] Run: docker network inspect no-preload-542467 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:57:11.622184  504634 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 20:57:11.625997  504634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
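Note: the grep/cp one-liner above simply ensures that /etc/hosts on the node carries a single entry of the form

    192.168.76.1	host.minikube.internal

(reconstructed from the command itself, not read back from the node).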
	I1227 20:57:11.636148  504634 kubeadm.go:884] updating cluster {Name:no-preload-542467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-542467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:57:11.636269  504634 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:57:11.636317  504634 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:57:11.661120  504634 crio.go:557] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0". assuming images are not preloaded.
	I1227 20:57:11.661144  504634 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0 registry.k8s.io/kube-controller-manager:v1.35.0 registry.k8s.io/kube-scheduler:v1.35.0 registry.k8s.io/kube-proxy:v1.35.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1227 20:57:11.661191  504634 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:57:11.661403  504634 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0
	I1227 20:57:11.661523  504634 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 20:57:11.661660  504634 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0
	I1227 20:57:11.661757  504634 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0
	I1227 20:57:11.661868  504634 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1227 20:57:11.661966  504634 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1227 20:57:11.662073  504634 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1227 20:57:11.663050  504634 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0
	I1227 20:57:11.663289  504634 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1227 20:57:11.663351  504634 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0
	I1227 20:57:11.663422  504634 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1227 20:57:11.663462  504634 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:57:11.663529  504634 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1227 20:57:11.663581  504634 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 20:57:11.663623  504634 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0
	I1227 20:57:12.003232  504634 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0
	I1227 20:57:12.003656  504634 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.6-0
	I1227 20:57:12.014499  504634 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 20:57:12.015405  504634 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1227 20:57:12.019703  504634 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1227 20:57:12.063922  504634 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0
	I1227 20:57:12.076804  504634 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0
	I1227 20:57:12.127786  504634 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I1227 20:57:12.127881  504634 cri.go:226] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1227 20:57:12.127962  504634 ssh_runner.go:195] Run: which crictl
	I1227 20:57:12.128065  504634 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0" does not exist at hash "ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f" in container runtime
	I1227 20:57:12.128113  504634 cri.go:226] Removing image: registry.k8s.io/kube-scheduler:v1.35.0
	I1227 20:57:12.128155  504634 ssh_runner.go:195] Run: which crictl
	I1227 20:57:12.185107  504634 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0" does not exist at hash "88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0" in container runtime
	I1227 20:57:12.185149  504634 cri.go:226] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 20:57:12.185199  504634 ssh_runner.go:195] Run: which crictl
	I1227 20:57:12.185261  504634 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1227 20:57:12.185283  504634 cri.go:226] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1227 20:57:12.185314  504634 ssh_runner.go:195] Run: which crictl
	I1227 20:57:12.185379  504634 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1227 20:57:12.185398  504634 cri.go:226] Removing image: registry.k8s.io/pause:3.10.1
	I1227 20:57:12.185418  504634 ssh_runner.go:195] Run: which crictl
	I1227 20:57:12.185500  504634 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0" does not exist at hash "c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856" in container runtime
	I1227 20:57:12.185522  504634 cri.go:226] Removing image: registry.k8s.io/kube-apiserver:v1.35.0
	I1227 20:57:12.185545  504634 ssh_runner.go:195] Run: which crictl
	I1227 20:57:12.191095  504634 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0" does not exist at hash "de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5" in container runtime
	I1227 20:57:12.191141  504634 cri.go:226] Removing image: registry.k8s.io/kube-proxy:v1.35.0
	I1227 20:57:12.191189  504634 ssh_runner.go:195] Run: which crictl
	I1227 20:57:12.191264  504634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1227 20:57:12.191313  504634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1227 20:57:12.196268  504634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 20:57:12.196351  504634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1227 20:57:12.196407  504634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1227 20:57:12.198698  504634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1227 20:57:12.265591  504634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1227 20:57:12.265674  504634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1227 20:57:12.265739  504634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1227 20:57:12.312921  504634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1227 20:57:12.313020  504634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 20:57:12.313088  504634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1227 20:57:12.313145  504634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1227 20:57:12.354734  504634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1227 20:57:12.354812  504634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1227 20:57:12.354876  504634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1227 20:57:12.396871  504634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1227 20:57:12.396963  504634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1227 20:57:12.397020  504634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 20:57:12.434159  504634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1227 20:57:12.456594  504634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1227 20:57:12.456672  504634 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0
	I1227 20:57:12.456740  504634 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1227 20:57:12.456795  504634 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1227 20:57:12.456841  504634 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1227 20:57:12.498719  504634 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1227 20:57:12.498828  504634 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1227 20:57:12.498901  504634 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1227 20:57:12.498956  504634 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1227 20:57:12.499181  504634 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0
	I1227 20:57:12.499244  504634 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1227 20:57:12.508406  504634 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0
	I1227 20:57:12.508504  504634 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1227 20:57:12.508612  504634 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0': No such file or directory
	I1227 20:57:12.508629  504634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0 (15415808 bytes)
	I1227 20:57:12.508670  504634 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1227 20:57:12.508684  504634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
	I1227 20:57:12.556348  504634 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1227 20:57:12.556387  504634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1227 20:57:12.556461  504634 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0
	I1227 20:57:12.556537  504634 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0
	I1227 20:57:12.556601  504634 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1227 20:57:12.556616  504634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1227 20:57:12.556672  504634 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0': No such file or directory
	I1227 20:57:12.556685  504634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0 (20682752 bytes)
	I1227 20:57:12.556727  504634 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0': No such file or directory
	I1227 20:57:12.556740  504634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0 (24702976 bytes)
	W1227 20:57:12.588354  504634 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1227 20:57:12.588461  504634 retry.go:84] will retry after 200ms: ssh: rejected: connect failed (open failed)
	I1227 20:57:12.644422  504634 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0': No such file or directory
	I1227 20:57:12.644455  504634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0 (22434816 bytes)
	I1227 20:57:12.644509  504634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-542467
	I1227 20:57:12.672900  504634 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1227 20:57:12.672998  504634 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1227 20:57:12.673196  504634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-542467
	I1227 20:57:12.680133  504634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/no-preload-542467/id_rsa Username:docker}
	I1227 20:57:12.703766  504634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/no-preload-542467/id_rsa Username:docker}
	W1227 20:57:12.967990  504634 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1227 20:57:12.968164  504634 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:57:13.304187  504634 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1227 20:57:13.304228  504634 cri.go:226] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:57:13.304275  504634 ssh_runner.go:195] Run: which crictl
	I1227 20:57:13.304451  504634 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1227 20:57:13.304473  504634 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1227 20:57:13.304512  504634 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1227 20:57:13.343086  504634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:57:14.948499  504634 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0: (1.64396462s)
	I1227 20:57:14.948583  504634 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 from cache
	I1227 20:57:14.948614  504634 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0
	I1227 20:57:14.948547  504634 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.605420947s)
	I1227 20:57:14.948699  504634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:57:14.948789  504634 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0
	I1227 20:57:16.342984  504634 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0: (1.394168899s)
	I1227 20:57:16.343012  504634 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 from cache
	I1227 20:57:16.343011  504634 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.394289232s)
	I1227 20:57:16.343031  504634 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1227 20:57:16.343075  504634 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1227 20:57:16.343076  504634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:57:17.582632  504634 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.2394803s)
	I1227 20:57:17.582681  504634 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1227 20:57:17.582778  504634 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1227 20:57:17.582802  504634 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.239712998s)
	I1227 20:57:17.582817  504634 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22332-272475/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1227 20:57:17.582834  504634 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1227 20:57:17.582871  504634 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0
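Note: the cache_images / ssh_runner lines above repeat one pattern per image. A minimal shell paraphrase of that pattern, for orientation only (<image> stands in for e.g. etcd_3.6.6-0; this is a sketch of what the log shows, not minikube source code):

    # 1. check whether the image tarball already exists on the node
    stat -c "%s %y" /var/lib/minikube/images/<image>
    # 2. if stat exits non-zero, the tarball is copied over SSH from the
    #    host-side cache under ~/.minikube/cache/images/arm64/registry.k8s.io/
    # 3. load the copied tarball into the CRI-O image store
    sudo podman load -i /var/lib/minikube/images/<image>
    # 4. on success the log records "Transferred and loaded ... from cache"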
	I1227 20:57:20.246170  474910 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000888474s
	I1227 20:57:20.250878  474910 kubeadm.go:319] 
	I1227 20:57:20.251004  474910 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 20:57:20.251065  474910 kubeadm.go:319] 	- The kubelet is not running
	I1227 20:57:20.251251  474910 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 20:57:20.251285  474910 kubeadm.go:319] 
	I1227 20:57:20.251471  474910 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 20:57:20.251529  474910 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 20:57:20.251585  474910 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 20:57:20.251590  474910 kubeadm.go:319] 
	I1227 20:57:20.263808  474910 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 20:57:20.264368  474910 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 20:57:20.264529  474910 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 20:57:20.264838  474910 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 20:57:20.264873  474910 kubeadm.go:319] 
	I1227 20:57:20.264978  474910 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 20:57:20.265072  474910 kubeadm.go:403] duration metric: took 8m7.926946713s to StartCluster
	I1227 20:57:20.265133  474910 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:57:20.265226  474910 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:57:20.308233  474910 cri.go:96] found id: ""
	I1227 20:57:20.308321  474910 logs.go:282] 0 containers: []
	W1227 20:57:20.308343  474910 logs.go:284] No container was found matching "kube-apiserver"
	I1227 20:57:20.308379  474910 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:57:20.308480  474910 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:57:20.342783  474910 cri.go:96] found id: ""
	I1227 20:57:20.342855  474910 logs.go:282] 0 containers: []
	W1227 20:57:20.342877  474910 logs.go:284] No container was found matching "etcd"
	I1227 20:57:20.342897  474910 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:57:20.342984  474910 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:57:20.375582  474910 cri.go:96] found id: ""
	I1227 20:57:20.375654  474910 logs.go:282] 0 containers: []
	W1227 20:57:20.375676  474910 logs.go:284] No container was found matching "coredns"
	I1227 20:57:20.375696  474910 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:57:20.375782  474910 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:57:20.411280  474910 cri.go:96] found id: ""
	I1227 20:57:20.411359  474910 logs.go:282] 0 containers: []
	W1227 20:57:20.411382  474910 logs.go:284] No container was found matching "kube-scheduler"
	I1227 20:57:20.411403  474910 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:57:20.411514  474910 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:57:20.443201  474910 cri.go:96] found id: ""
	I1227 20:57:20.443281  474910 logs.go:282] 0 containers: []
	W1227 20:57:20.443303  474910 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:57:20.443326  474910 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:57:20.443432  474910 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:57:20.471796  474910 cri.go:96] found id: ""
	I1227 20:57:20.471871  474910 logs.go:282] 0 containers: []
	W1227 20:57:20.471907  474910 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 20:57:20.471933  474910 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:57:20.472020  474910 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:57:20.503811  474910 cri.go:96] found id: ""
	I1227 20:57:20.503890  474910 logs.go:282] 0 containers: []
	W1227 20:57:20.503914  474910 logs.go:284] No container was found matching "kindnet"
	I1227 20:57:20.503938  474910 logs.go:123] Gathering logs for kubelet ...
	I1227 20:57:20.503984  474910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:57:20.582388  474910 logs.go:123] Gathering logs for dmesg ...
	I1227 20:57:20.582471  474910 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:57:20.606401  474910 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:57:20.606473  474910 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:57:20.684420  474910 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:57:20.675228    4891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:57:20.675985    4891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:57:20.677695    4891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:57:20.678005    4891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:57:20.680079    4891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:57:20.675228    4891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:57:20.675985    4891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:57:20.677695    4891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:57:20.678005    4891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:57:20.680079    4891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:57:20.684486  474910 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:57:20.684522  474910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:57:20.722115  474910 logs.go:123] Gathering logs for container status ...
	I1227 20:57:20.722195  474910 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1227 20:57:20.769054  474910 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000888474s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 20:57:20.769161  474910 out.go:285] * 
	W1227 20:57:20.769358  474910 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000888474s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 20:57:20.769415  474910 out.go:285] * 
	W1227 20:57:20.769730  474910 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 20:57:20.776045  474910 out.go:203] 
	W1227 20:57:20.780003  474910 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000888474s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 20:57:20.780151  474910 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 20:57:20.780223  474910 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 20:57:20.783377  474910 out.go:203] 
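Note: the suggestion above, spelled out as a command (the profile name is a placeholder; whether switching the kubelet to the systemd cgroup driver actually resolves this failure is not verified by this log):

    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd

Checking 'systemctl status kubelet' and 'journalctl -xeu kubelet' on the node first, as the kubeadm output recommends, should show why the kubelet never became healthy.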
	
	
	==> CRI-O <==
	Dec 27 20:49:10 force-systemd-flag-604544 crio[836]: time="2025-12-27T20:49:10.330444465Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 27 20:49:10 force-systemd-flag-604544 crio[836]: time="2025-12-27T20:49:10.33059971Z" level=info msg="Starting seccomp notifier watcher"
	Dec 27 20:49:10 force-systemd-flag-604544 crio[836]: time="2025-12-27T20:49:10.33065442Z" level=info msg="Create NRI interface"
	Dec 27 20:49:10 force-systemd-flag-604544 crio[836]: time="2025-12-27T20:49:10.33074405Z" level=info msg="built-in NRI default validator is disabled"
	Dec 27 20:49:10 force-systemd-flag-604544 crio[836]: time="2025-12-27T20:49:10.330752427Z" level=info msg="runtime interface created"
	Dec 27 20:49:10 force-systemd-flag-604544 crio[836]: time="2025-12-27T20:49:10.330765096Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 27 20:49:10 force-systemd-flag-604544 crio[836]: time="2025-12-27T20:49:10.330771799Z" level=info msg="runtime interface starting up..."
	Dec 27 20:49:10 force-systemd-flag-604544 crio[836]: time="2025-12-27T20:49:10.330778437Z" level=info msg="starting plugins..."
	Dec 27 20:49:10 force-systemd-flag-604544 crio[836]: time="2025-12-27T20:49:10.330791121Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 27 20:49:10 force-systemd-flag-604544 crio[836]: time="2025-12-27T20:49:10.330852322Z" level=info msg="No systemd watchdog enabled"
	Dec 27 20:49:10 force-systemd-flag-604544 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 27 20:49:12 force-systemd-flag-604544 crio[836]: time="2025-12-27T20:49:12.774675473Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=61ab1407-68ad-4450-aa44-6c547fe6b703 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:49:12 force-systemd-flag-604544 crio[836]: time="2025-12-27T20:49:12.776076431Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=82d00476-584f-4c36-8dcc-82110448e042 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:49:12 force-systemd-flag-604544 crio[836]: time="2025-12-27T20:49:12.777811805Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=3df3961f-2881-4dac-bcc7-f82feeb9866d name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:49:12 force-systemd-flag-604544 crio[836]: time="2025-12-27T20:49:12.778454985Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=ce379dea-ed27-42e3-bd16-c93206a65383 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:49:12 force-systemd-flag-604544 crio[836]: time="2025-12-27T20:49:12.78173839Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=0e5f2a1f-e304-46f9-acaf-2e691eb6f0c4 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:49:12 force-systemd-flag-604544 crio[836]: time="2025-12-27T20:49:12.782439193Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=4ec52b55-81c2-4f9e-84e7-79911efa88e9 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:49:12 force-systemd-flag-604544 crio[836]: time="2025-12-27T20:49:12.783167786Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=250984f4-1a17-4cc1-a72a-2fed59469035 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:53:18 force-systemd-flag-604544 crio[836]: time="2025-12-27T20:53:18.66870581Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=b33b4287-68d7-457f-85c0-5dbd1090069d name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:53:18 force-systemd-flag-604544 crio[836]: time="2025-12-27T20:53:18.669460266Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=8f92a16f-30fa-4f22-8f2f-041555d77f6a name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:53:18 force-systemd-flag-604544 crio[836]: time="2025-12-27T20:53:18.670018068Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=da8a9b2f-893c-4753-b23a-46dc2586b094 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:53:18 force-systemd-flag-604544 crio[836]: time="2025-12-27T20:53:18.670480397Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=e0cba8f4-7a82-4df9-ab88-7a4dcdadcf1e name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:53:18 force-systemd-flag-604544 crio[836]: time="2025-12-27T20:53:18.671112519Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=16d12640-d85b-4420-ada0-0322a60f5dea name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:53:18 force-systemd-flag-604544 crio[836]: time="2025-12-27T20:53:18.671719919Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=6647b4ff-0d8e-41e1-8ba7-91afec609d39 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:53:18 force-systemd-flag-604544 crio[836]: time="2025-12-27T20:53:18.672274299Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=0ee6d0cb-1397-4564-8812-ca5b85e17396 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:57:22.509618    5019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:57:22.510391    5019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:57:22.512123    5019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:57:22.512455    5019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:57:22.516690    5019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec27 20:24] overlayfs: idmapped layers are currently not supported
	[Dec27 20:25] overlayfs: idmapped layers are currently not supported
	[ +35.447549] overlayfs: idmapped layers are currently not supported
	[Dec27 20:26] overlayfs: idmapped layers are currently not supported
	[Dec27 20:27] overlayfs: idmapped layers are currently not supported
	[  +6.770645] overlayfs: idmapped layers are currently not supported
	[Dec27 20:28] overlayfs: idmapped layers are currently not supported
	[ +25.872751] overlayfs: idmapped layers are currently not supported
	[Dec27 20:29] overlayfs: idmapped layers are currently not supported
	[ +32.997137] overlayfs: idmapped layers are currently not supported
	[Dec27 20:31] overlayfs: idmapped layers are currently not supported
	[Dec27 20:33] overlayfs: idmapped layers are currently not supported
	[ +33.772475] overlayfs: idmapped layers are currently not supported
	[Dec27 20:39] overlayfs: idmapped layers are currently not supported
	[Dec27 20:40] overlayfs: idmapped layers are currently not supported
	[Dec27 20:44] overlayfs: idmapped layers are currently not supported
	[Dec27 20:45] overlayfs: idmapped layers are currently not supported
	[Dec27 20:49] overlayfs: idmapped layers are currently not supported
	[Dec27 20:50] overlayfs: idmapped layers are currently not supported
	[Dec27 20:51] overlayfs: idmapped layers are currently not supported
	[Dec27 20:52] overlayfs: idmapped layers are currently not supported
	[Dec27 20:53] overlayfs: idmapped layers are currently not supported
	[Dec27 20:55] overlayfs: idmapped layers are currently not supported
	[ +57.272039] overlayfs: idmapped layers are currently not supported
	[Dec27 20:57] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:57:22 up  2:39,  0 user,  load average: 2.75, 1.61, 1.73
	Linux force-systemd-flag-604544 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 27 20:57:20 force-systemd-flag-604544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 20:57:20 force-systemd-flag-604544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 648.
	Dec 27 20:57:20 force-systemd-flag-604544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:57:20 force-systemd-flag-604544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:57:21 force-systemd-flag-604544 kubelet[4908]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 20:57:21 force-systemd-flag-604544 kubelet[4908]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 20:57:21 force-systemd-flag-604544 kubelet[4908]: E1227 20:57:21.089160    4908 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 20:57:21 force-systemd-flag-604544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 20:57:21 force-systemd-flag-604544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 20:57:21 force-systemd-flag-604544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 649.
	Dec 27 20:57:21 force-systemd-flag-604544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:57:21 force-systemd-flag-604544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:57:21 force-systemd-flag-604544 kubelet[4937]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 20:57:21 force-systemd-flag-604544 kubelet[4937]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 20:57:21 force-systemd-flag-604544 kubelet[4937]: E1227 20:57:21.804616    4937 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 20:57:21 force-systemd-flag-604544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 20:57:21 force-systemd-flag-604544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 20:57:22 force-systemd-flag-604544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 650.
	Dec 27 20:57:22 force-systemd-flag-604544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:57:22 force-systemd-flag-604544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:57:22 force-systemd-flag-604544 kubelet[5024]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 20:57:22 force-systemd-flag-604544 kubelet[5024]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 20:57:22 force-systemd-flag-604544 kubelet[5024]: E1227 20:57:22.552411    5024 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 20:57:22 force-systemd-flag-604544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 20:57:22 force-systemd-flag-604544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-604544 -n force-systemd-flag-604544
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-604544 -n force-systemd-flag-604544: exit status 6 (423.571312ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1227 20:57:23.081262  507273 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-604544" does not appear in /home/jenkins/minikube-integration/22332-272475/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-604544" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-604544" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-604544
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-604544: (3.356288316s)
--- FAIL: TestForceSystemdFlag (508.50s)
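
Note on the failure above (annotation, not part of the recorded run): the kubelet log shows the kubelet crash-looping (restart counter at 650) because it refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so the control plane never came up and the later describe-nodes and status checks failed with connection refused on localhost:8443. A minimal sketch for confirming the agent's cgroup mode is below; it assumes shell access to the Jenkins host and is illustrative only.

	# Sketch (not from this run): check whether the host is on cgroup v2.
	# "cgroup2fs" indicates the unified (v2) hierarchy; "tmpfs" indicates cgroup v1.
	stat -fc %T /sys/fs/cgroup/
	# The kernel command line shows whether the unified hierarchy was explicitly forced on or off:
	grep -o 'systemd.unified_cgroup_hierarchy=[01]' /proc/cmdline || true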

x
+
TestForceSystemdEnv (505.4s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-859716 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
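
(Context, not part of the recorded run: with MINIKUBE_FORCE_SYSTEMD=true set, minikube configures CRI-O to use the systemd cgroup manager; the stderr log below shows it rewriting cgroup_manager = "systemd" in /etc/crio/crio.conf.d/02-crio.conf. A rough flag-based equivalent, plus a quick way to inspect the resulting CRI-O setting inside the node container, is sketched here; treat the exact flag and paths as illustrative.)

	# Sketch (illustrative): flag-based equivalent of MINIKUBE_FORCE_SYSTEMD=true
	out/minikube-linux-arm64 start -p force-systemd-env-859716 --memory=3072 \
	  --force-systemd --driver=docker --container-runtime=crio
	# Inspect the cgroup manager CRI-O ended up with inside the node container:
	docker exec force-systemd-env-859716 grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf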
E1227 20:42:13.966532  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:42:54.129686  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-859716 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 109 (8m21.251666718s)

-- stdout --
	* [force-systemd-env-859716] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-859716" primary control-plane node in "force-systemd-env-859716" cluster
	* Pulling base image v0.0.48-1766570851-22316 ...
	* Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	
	

-- /stdout --
** stderr ** 
	I1227 20:40:49.960180  452036 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:40:49.960351  452036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:40:49.960384  452036 out.go:374] Setting ErrFile to fd 2...
	I1227 20:40:49.960406  452036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:40:49.960678  452036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:40:49.961175  452036 out.go:368] Setting JSON to false
	I1227 20:40:49.962116  452036 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":8602,"bootTime":1766859448,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:40:49.962219  452036 start.go:143] virtualization:  
	I1227 20:40:49.966205  452036 out.go:179] * [force-systemd-env-859716] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:40:49.970748  452036 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:40:49.970889  452036 notify.go:221] Checking for updates...
	I1227 20:40:49.977223  452036 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:40:49.980510  452036 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:40:49.983627  452036 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:40:49.986735  452036 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:40:49.989698  452036 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1227 20:40:49.993238  452036 config.go:182] Loaded profile config "running-upgrade-680512": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1227 20:40:49.993330  452036 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:40:50.025239  452036 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:40:50.025471  452036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:40:50.098790  452036 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:40:50.083154167 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:40:50.098908  452036 docker.go:319] overlay module found
	I1227 20:40:50.102310  452036 out.go:179] * Using the docker driver based on user configuration
	I1227 20:40:50.105211  452036 start.go:309] selected driver: docker
	I1227 20:40:50.105229  452036 start.go:928] validating driver "docker" against <nil>
	I1227 20:40:50.105242  452036 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:40:50.106082  452036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:40:50.160327  452036 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:40:50.15132299 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:40:50.160485  452036 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 20:40:50.160703  452036 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 20:40:50.163664  452036 out.go:179] * Using Docker driver with root privileges
	I1227 20:40:50.166526  452036 cni.go:84] Creating CNI manager for ""
	I1227 20:40:50.166595  452036 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:40:50.166609  452036 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 20:40:50.166680  452036 start.go:353] cluster config:
	{Name:force-systemd-env-859716 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-859716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:40:50.169755  452036 out.go:179] * Starting "force-systemd-env-859716" primary control-plane node in "force-systemd-env-859716" cluster
	I1227 20:40:50.172676  452036 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:40:50.175463  452036 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:40:50.178295  452036 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:40:50.178357  452036 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:40:50.178366  452036 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:40:50.178371  452036 cache.go:65] Caching tarball of preloaded images
	I1227 20:40:50.178511  452036 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:40:50.178523  452036 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:40:50.178635  452036 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/config.json ...
	I1227 20:40:50.178661  452036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/config.json: {Name:mk41797654bbbf1c38a9e8bb4ccc2b160c4b1fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:40:50.197478  452036 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:40:50.197505  452036 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:40:50.197525  452036 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:40:50.197557  452036 start.go:360] acquireMachinesLock for force-systemd-env-859716: {Name:mk5d7339f94cdf3fa32d170f7fc207731903c625 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:40:50.197668  452036 start.go:364] duration metric: took 90.434µs to acquireMachinesLock for "force-systemd-env-859716"
	I1227 20:40:50.197700  452036 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-859716 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-859716 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:40:50.197773  452036 start.go:125] createHost starting for "" (driver="docker")
	I1227 20:40:50.203041  452036 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 20:40:50.203295  452036 start.go:159] libmachine.API.Create for "force-systemd-env-859716" (driver="docker")
	I1227 20:40:50.203334  452036 client.go:173] LocalClient.Create starting
	I1227 20:40:50.203403  452036 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem
	I1227 20:40:50.203442  452036 main.go:144] libmachine: Decoding PEM data...
	I1227 20:40:50.203466  452036 main.go:144] libmachine: Parsing certificate...
	I1227 20:40:50.203524  452036 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem
	I1227 20:40:50.203547  452036 main.go:144] libmachine: Decoding PEM data...
	I1227 20:40:50.203562  452036 main.go:144] libmachine: Parsing certificate...
	I1227 20:40:50.203951  452036 cli_runner.go:164] Run: docker network inspect force-systemd-env-859716 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 20:40:50.219978  452036 cli_runner.go:211] docker network inspect force-systemd-env-859716 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 20:40:50.220060  452036 network_create.go:284] running [docker network inspect force-systemd-env-859716] to gather additional debugging logs...
	I1227 20:40:50.220081  452036 cli_runner.go:164] Run: docker network inspect force-systemd-env-859716
	W1227 20:40:50.236310  452036 cli_runner.go:211] docker network inspect force-systemd-env-859716 returned with exit code 1
	I1227 20:40:50.236348  452036 network_create.go:287] error running [docker network inspect force-systemd-env-859716]: docker network inspect force-systemd-env-859716: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-859716 not found
	I1227 20:40:50.236361  452036 network_create.go:289] output of [docker network inspect force-systemd-env-859716]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-859716 not found
	
	** /stderr **
	I1227 20:40:50.236472  452036 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:40:50.252089  452036 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9521cb9225c5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:1d:ef:38:b7:a6} reservation:<nil>}
	I1227 20:40:50.252505  452036 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-68d11cc2ab47 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:8d:ad:37:cb:fe} reservation:<nil>}
	I1227 20:40:50.252740  452036 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d3b7cfff4895 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:4a:e3:08:10:2f} reservation:<nil>}
	I1227 20:40:50.253185  452036 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019f5340}
	I1227 20:40:50.253214  452036 network_create.go:124] attempt to create docker network force-systemd-env-859716 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 20:40:50.253273  452036 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-859716 force-systemd-env-859716
	I1227 20:40:50.312084  452036 network_create.go:108] docker network force-systemd-env-859716 192.168.76.0/24 created
	I1227 20:40:50.312118  452036 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-859716" container
	I1227 20:40:50.312189  452036 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 20:40:50.328003  452036 cli_runner.go:164] Run: docker volume create force-systemd-env-859716 --label name.minikube.sigs.k8s.io=force-systemd-env-859716 --label created_by.minikube.sigs.k8s.io=true
	I1227 20:40:50.345261  452036 oci.go:103] Successfully created a docker volume force-systemd-env-859716
	I1227 20:40:50.345348  452036 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-859716-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-859716 --entrypoint /usr/bin/test -v force-systemd-env-859716:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 20:40:50.870448  452036 oci.go:107] Successfully prepared a docker volume force-systemd-env-859716
	I1227 20:40:50.870533  452036 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:40:50.870549  452036 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 20:40:50.870616  452036 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-859716:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 20:40:55.004993  452036 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-859716:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.134335921s)
	I1227 20:40:55.005023  452036 kic.go:203] duration metric: took 4.134471694s to extract preloaded images to volume ...
	W1227 20:40:55.005164  452036 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 20:40:55.005278  452036 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 20:40:55.095445  452036 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-859716 --name force-systemd-env-859716 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-859716 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-859716 --network force-systemd-env-859716 --ip 192.168.76.2 --volume force-systemd-env-859716:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 20:40:55.435901  452036 cli_runner.go:164] Run: docker container inspect force-systemd-env-859716 --format={{.State.Running}}
	I1227 20:40:55.461720  452036 cli_runner.go:164] Run: docker container inspect force-systemd-env-859716 --format={{.State.Status}}
	I1227 20:40:55.495284  452036 cli_runner.go:164] Run: docker exec force-systemd-env-859716 stat /var/lib/dpkg/alternatives/iptables
	I1227 20:40:55.605746  452036 oci.go:144] the created container "force-systemd-env-859716" has a running status.
	I1227 20:40:55.605791  452036 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/force-systemd-env-859716/id_rsa...
	I1227 20:40:55.850718  452036 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/force-systemd-env-859716/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 20:40:55.850795  452036 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22332-272475/.minikube/machines/force-systemd-env-859716/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 20:40:55.874085  452036 cli_runner.go:164] Run: docker container inspect force-systemd-env-859716 --format={{.State.Status}}
	I1227 20:40:55.916971  452036 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 20:40:55.916992  452036 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-859716 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 20:40:56.008293  452036 cli_runner.go:164] Run: docker container inspect force-systemd-env-859716 --format={{.State.Status}}
	I1227 20:40:56.029330  452036 machine.go:94] provisionDockerMachine start ...
	I1227 20:40:56.029442  452036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-859716
	I1227 20:40:56.060444  452036 main.go:144] libmachine: Using SSH client type: native
	I1227 20:40:56.060803  452036 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1227 20:40:56.060820  452036 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:40:56.061383  452036 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 20:40:59.201142  452036 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-859716
	
	I1227 20:40:59.201167  452036 ubuntu.go:182] provisioning hostname "force-systemd-env-859716"
	I1227 20:40:59.201243  452036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-859716
	I1227 20:40:59.220127  452036 main.go:144] libmachine: Using SSH client type: native
	I1227 20:40:59.220458  452036 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1227 20:40:59.220474  452036 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-859716 && echo "force-systemd-env-859716" | sudo tee /etc/hostname
	I1227 20:40:59.371488  452036 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-859716
	
	I1227 20:40:59.371559  452036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-859716
	I1227 20:40:59.389029  452036 main.go:144] libmachine: Using SSH client type: native
	I1227 20:40:59.389356  452036 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1227 20:40:59.389372  452036 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-859716' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-859716/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-859716' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:40:59.538003  452036 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:40:59.538041  452036 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:40:59.538084  452036 ubuntu.go:190] setting up certificates
	I1227 20:40:59.538094  452036 provision.go:84] configureAuth start
	I1227 20:40:59.538169  452036 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-859716
	I1227 20:40:59.555440  452036 provision.go:143] copyHostCerts
	I1227 20:40:59.555490  452036 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:40:59.555522  452036 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:40:59.555541  452036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:40:59.555621  452036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:40:59.555711  452036 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:40:59.555736  452036 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:40:59.555744  452036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:40:59.555777  452036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:40:59.555836  452036 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:40:59.555858  452036 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:40:59.555869  452036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:40:59.555900  452036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:40:59.555955  452036 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-859716 san=[127.0.0.1 192.168.76.2 force-systemd-env-859716 localhost minikube]
	I1227 20:40:59.862715  452036 provision.go:177] copyRemoteCerts
	I1227 20:40:59.862787  452036 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:40:59.862833  452036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-859716
	I1227 20:40:59.879249  452036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/force-systemd-env-859716/id_rsa Username:docker}
	I1227 20:40:59.977041  452036 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:40:59.977095  452036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:40:59.993471  452036 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:40:59.993545  452036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1227 20:41:00.038506  452036 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:41:00.038587  452036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:41:00.160085  452036 provision.go:87] duration metric: took 621.971281ms to configureAuth
	I1227 20:41:00.160112  452036 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:41:00.160332  452036 config.go:182] Loaded profile config "force-systemd-env-859716": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:41:00.160447  452036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-859716
	I1227 20:41:00.224712  452036 main.go:144] libmachine: Using SSH client type: native
	I1227 20:41:00.225067  452036 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1227 20:41:00.225083  452036 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:41:00.564416  452036 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:41:00.564443  452036 machine.go:97] duration metric: took 4.53508953s to provisionDockerMachine
	I1227 20:41:00.564454  452036 client.go:176] duration metric: took 10.361109301s to LocalClient.Create
	I1227 20:41:00.564467  452036 start.go:167] duration metric: took 10.361174513s to libmachine.API.Create "force-systemd-env-859716"
	I1227 20:41:00.564475  452036 start.go:293] postStartSetup for "force-systemd-env-859716" (driver="docker")
	I1227 20:41:00.564484  452036 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:41:00.564557  452036 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:41:00.564605  452036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-859716
	I1227 20:41:00.581212  452036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/force-systemd-env-859716/id_rsa Username:docker}
	I1227 20:41:00.681661  452036 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:41:00.685023  452036 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:41:00.685053  452036 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:41:00.685065  452036 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:41:00.685117  452036 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:41:00.685201  452036 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:41:00.685217  452036 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:41:00.685313  452036 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:41:00.692826  452036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:41:00.709961  452036 start.go:296] duration metric: took 145.472168ms for postStartSetup
	I1227 20:41:00.710334  452036 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-859716
	I1227 20:41:00.726692  452036 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/config.json ...
	I1227 20:41:00.726968  452036 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:41:00.727014  452036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-859716
	I1227 20:41:00.743069  452036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/force-systemd-env-859716/id_rsa Username:docker}
	I1227 20:41:00.838325  452036 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:41:00.843724  452036 start.go:128] duration metric: took 10.645935811s to createHost
	I1227 20:41:00.843750  452036 start.go:83] releasing machines lock for "force-systemd-env-859716", held for 10.6460669s
	I1227 20:41:00.843859  452036 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-859716
	I1227 20:41:00.860283  452036 ssh_runner.go:195] Run: cat /version.json
	I1227 20:41:00.860349  452036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-859716
	I1227 20:41:00.860599  452036 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:41:00.860661  452036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-859716
	I1227 20:41:00.881563  452036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/force-systemd-env-859716/id_rsa Username:docker}
	I1227 20:41:00.886285  452036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/force-systemd-env-859716/id_rsa Username:docker}
	I1227 20:41:00.981724  452036 ssh_runner.go:195] Run: systemctl --version
	I1227 20:41:01.072825  452036 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:41:01.107815  452036 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:41:01.112028  452036 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:41:01.112109  452036 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:41:01.142423  452036 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 20:41:01.142454  452036 start.go:496] detecting cgroup driver to use...
	I1227 20:41:01.142473  452036 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 20:41:01.142575  452036 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:41:01.161419  452036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:41:01.176254  452036 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:41:01.176321  452036 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:41:01.195952  452036 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:41:01.216370  452036 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:41:01.338612  452036 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:41:01.455163  452036 docker.go:234] disabling docker service ...
	I1227 20:41:01.455230  452036 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:41:01.477824  452036 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:41:01.491014  452036 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:41:01.616913  452036 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:41:01.754493  452036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:41:01.771501  452036 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:41:01.788439  452036 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:41:01.788575  452036 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:41:01.801361  452036 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 20:41:01.801486  452036 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:41:01.811069  452036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:41:01.821409  452036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:41:01.833399  452036 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:41:01.841887  452036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:41:01.852427  452036 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:41:01.870164  452036 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:41:01.879433  452036 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:41:01.889172  452036 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:41:01.897911  452036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:41:02.016956  452036 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:41:02.184052  452036 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:41:02.184165  452036 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:41:02.188079  452036 start.go:574] Will wait 60s for crictl version
	I1227 20:41:02.188146  452036 ssh_runner.go:195] Run: which crictl
	I1227 20:41:02.191773  452036 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:41:02.218286  452036 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:41:02.218371  452036 ssh_runner.go:195] Run: crio --version
	I1227 20:41:02.249391  452036 ssh_runner.go:195] Run: crio --version
	I1227 20:41:02.286515  452036 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:41:02.289484  452036 cli_runner.go:164] Run: docker network inspect force-systemd-env-859716 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:41:02.306348  452036 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 20:41:02.310313  452036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:41:02.320056  452036 kubeadm.go:884] updating cluster {Name:force-systemd-env-859716 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-859716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:41:02.320172  452036 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:41:02.320279  452036 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:41:02.357790  452036 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:41:02.357819  452036 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:41:02.357874  452036 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:41:02.389121  452036 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:41:02.389146  452036 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:41:02.389155  452036 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 20:41:02.389255  452036 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-859716 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-859716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
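The unit snippet above is the kubelet systemd drop-in that gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down; it pins --hostname-override, --node-ip, --cgroups-per-qos=false and an empty --enforce-node-allocatable= for this profile. A small sketch (standard systemd tooling, not part of the test) of how the merged unit could be inspected on the node:

	sudo systemctl cat kubelet                             # base unit plus the 10-kubeadm.conf drop-in
	sudo systemctl show kubelet -p ExecStart --no-pager    # the effective kubelet command line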
	I1227 20:41:02.389340  452036 ssh_runner.go:195] Run: crio config
	I1227 20:41:02.457117  452036 cni.go:84] Creating CNI manager for ""
	I1227 20:41:02.457142  452036 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:41:02.457161  452036 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:41:02.457192  452036 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-859716 NodeName:force-systemd-env-859716 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:41:02.457314  452036 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-859716"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
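Note that the generated KubeletConfiguration above sets cgroupDriver: systemd, matching the cgroup_manager value written into the CRI-O drop-in earlier; a mismatch between the two is a common reason for the kubelet failing its health check. A quick sketch (not part of the test) of comparing them on the node once kubeadm has written /var/lib/kubelet/config.yaml, as it does in the kubelet-start lines further down:

	grep cgroupDriver /var/lib/kubelet/config.yaml
	sudo grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf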
	
	I1227 20:41:02.457386  452036 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:41:02.464891  452036 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:41:02.464971  452036 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:41:02.472546  452036 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1227 20:41:02.485228  452036 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:41:02.497991  452036 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1227 20:41:02.510709  452036 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:41:02.514528  452036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:41:02.524142  452036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:41:02.644163  452036 ssh_runner.go:195] Run: sudo systemctl start kubelet
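The systemctl start here only starts the unit; it is not enabled, which appears to be what the later kubeadm preflight notice "[WARNING Service-kubelet]: kubelet service is not enabled" refers to. A sketch of the corresponding checks on the node:

	systemctl is-enabled kubelet   # expected to report a non-enabled state, per the later warning
	systemctl is-active kubelet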
	I1227 20:41:02.660926  452036 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716 for IP: 192.168.76.2
	I1227 20:41:02.660949  452036 certs.go:195] generating shared ca certs ...
	I1227 20:41:02.660976  452036 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:41:02.661112  452036 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:41:02.661166  452036 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:41:02.661176  452036 certs.go:257] generating profile certs ...
	I1227 20:41:02.661231  452036 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/client.key
	I1227 20:41:02.661255  452036 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/client.crt with IP's: []
	I1227 20:41:02.869585  452036 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/client.crt ...
	I1227 20:41:02.869617  452036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/client.crt: {Name:mk126bc016e22d84819e7ebcbaee20707e38ebc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:41:02.869825  452036 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/client.key ...
	I1227 20:41:02.869841  452036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/client.key: {Name:mk3e5021229a8194b571b6a06a7eea4cde432995 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:41:02.869938  452036 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/apiserver.key.c308272a
	I1227 20:41:02.869955  452036 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/apiserver.crt.c308272a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 20:41:03.058948  452036 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/apiserver.crt.c308272a ...
	I1227 20:41:03.058982  452036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/apiserver.crt.c308272a: {Name:mkd02213d54a4e279c498e862800df2948cd271e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:41:03.059170  452036 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/apiserver.key.c308272a ...
	I1227 20:41:03.059186  452036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/apiserver.key.c308272a: {Name:mk177c890fce61507c37c2e63f2ecf71fa09a53f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:41:03.059274  452036 certs.go:382] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/apiserver.crt.c308272a -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/apiserver.crt
	I1227 20:41:03.059360  452036 certs.go:386] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/apiserver.key.c308272a -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/apiserver.key
	I1227 20:41:03.059421  452036 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/proxy-client.key
	I1227 20:41:03.059443  452036 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/proxy-client.crt with IP's: []
	I1227 20:41:03.242064  452036 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/proxy-client.crt ...
	I1227 20:41:03.242102  452036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/proxy-client.crt: {Name:mkf458d9b25062186913e4c233e04bd42b276a86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:41:03.242349  452036 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/proxy-client.key ...
	I1227 20:41:03.242368  452036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/proxy-client.key: {Name:mkde052b56fb18152f5cbca66d41fe3971caa56e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:41:03.242469  452036 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:41:03.242492  452036 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:41:03.242511  452036 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:41:03.242529  452036 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:41:03.242541  452036 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:41:03.242557  452036 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:41:03.242572  452036 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:41:03.242585  452036 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:41:03.242637  452036 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:41:03.242686  452036 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:41:03.242698  452036 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:41:03.242729  452036 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:41:03.242759  452036 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:41:03.242788  452036 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:41:03.242835  452036 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:41:03.242868  452036 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:41:03.242885  452036 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:41:03.242896  452036 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:41:03.243477  452036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:41:03.262143  452036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:41:03.281096  452036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:41:03.299629  452036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:41:03.317188  452036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 20:41:03.335271  452036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:41:03.352705  452036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:41:03.371271  452036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-env-859716/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 20:41:03.388432  452036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:41:03.406225  452036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:41:03.424773  452036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:41:03.442657  452036 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
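The apiserver certificate copied above was generated with IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.76.2 (see the crypto.go line earlier). A sketch of how those SANs could be double-checked on the node with openssl; the output should include at least those four IPs:

	sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	  | grep -A1 'Subject Alternative Name'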
	I1227 20:41:03.455066  452036 ssh_runner.go:195] Run: openssl version
	I1227 20:41:03.461511  452036 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:41:03.469570  452036 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:41:03.477845  452036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:41:03.482402  452036 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:41:03.482520  452036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:41:03.527853  452036 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:41:03.535591  452036 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2743362.pem /etc/ssl/certs/3ec20f2e.0
	I1227 20:41:03.542695  452036 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:41:03.549832  452036 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:41:03.557188  452036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:41:03.560927  452036 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:41:03.561033  452036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:41:03.602059  452036 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:41:03.609680  452036 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 20:41:03.616702  452036 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:41:03.625027  452036 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:41:03.632669  452036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:41:03.636443  452036 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:41:03.636563  452036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:41:03.678429  452036 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:41:03.687270  452036 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/274336.pem /etc/ssl/certs/51391683.0
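The 8-hex-digit link names used above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes of the respective certificates, which is how the "openssl x509 -hash" calls and the "ln -fs" calls fit together. A generic sketch of the same step for one of the certificates:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # here: b5213941.0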
	I1227 20:41:03.694691  452036 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:41:03.698318  452036 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 20:41:03.698372  452036 kubeadm.go:401] StartCluster: {Name:force-systemd-env-859716 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-859716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:41:03.698451  452036 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:41:03.698515  452036 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:41:03.725244  452036 cri.go:96] found id: ""
	I1227 20:41:03.725322  452036 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:41:03.733233  452036 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 20:41:03.741379  452036 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:41:03.741536  452036 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:41:03.749305  452036 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:41:03.749376  452036 kubeadm.go:158] found existing configuration files:
	
	I1227 20:41:03.749439  452036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 20:41:03.756839  452036 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:41:03.756904  452036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:41:03.763864  452036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 20:41:03.771581  452036 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:41:03.771650  452036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:41:03.778331  452036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 20:41:03.785683  452036 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:41:03.785753  452036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:41:03.792737  452036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 20:41:03.800399  452036 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:41:03.800476  452036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 20:41:03.808632  452036 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:41:03.848277  452036 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 20:41:03.848335  452036 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 20:41:03.930840  452036 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 20:41:03.930918  452036 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 20:41:03.930958  452036 kubeadm.go:319] OS: Linux
	I1227 20:41:03.931008  452036 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 20:41:03.931061  452036 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 20:41:03.931113  452036 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 20:41:03.931165  452036 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 20:41:03.931215  452036 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 20:41:03.931268  452036 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 20:41:03.931318  452036 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 20:41:03.931370  452036 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 20:41:03.931421  452036 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 20:41:03.994105  452036 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 20:41:03.994284  452036 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 20:41:03.994422  452036 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 20:41:04.003270  452036 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 20:41:04.010211  452036 out.go:252]   - Generating certificates and keys ...
	I1227 20:41:04.010330  452036 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 20:41:04.010403  452036 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 20:41:04.294580  452036 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 20:41:04.520836  452036 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 20:41:04.803524  452036 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 20:41:04.920681  452036 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 20:41:05.048940  452036 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 20:41:05.049175  452036 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-859716 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 20:41:05.135701  452036 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 20:41:05.136006  452036 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-859716 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 20:41:06.368599  452036 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 20:41:06.575535  452036 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 20:41:06.783233  452036 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 20:41:06.783554  452036 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 20:41:07.065734  452036 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 20:41:07.391615  452036 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 20:41:07.539271  452036 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 20:41:07.700878  452036 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 20:41:07.883132  452036 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 20:41:07.884037  452036 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 20:41:07.886823  452036 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 20:41:07.890333  452036 out.go:252]   - Booting up control plane ...
	I1227 20:41:07.890441  452036 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 20:41:07.890521  452036 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 20:41:07.890599  452036 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 20:41:07.916239  452036 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 20:41:07.916392  452036 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 20:41:07.924581  452036 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 20:41:07.926679  452036 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 20:41:07.926734  452036 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 20:41:08.058744  452036 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 20:41:08.058887  452036 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 20:45:08.061971  452036 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000875242s
	I1227 20:45:08.062001  452036 kubeadm.go:319] 
	I1227 20:45:08.062059  452036 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 20:45:08.062093  452036 kubeadm.go:319] 	- The kubelet is not running
	I1227 20:45:08.062198  452036 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 20:45:08.062204  452036 kubeadm.go:319] 
	I1227 20:45:08.063173  452036 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 20:45:08.063261  452036 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 20:45:08.063317  452036 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 20:45:08.063323  452036 kubeadm.go:319] 
	I1227 20:45:08.064235  452036 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 20:45:08.065147  452036 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 20:45:08.065346  452036 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 20:45:08.067194  452036 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1227 20:45:08.067221  452036 kubeadm.go:319] 
	I1227 20:45:08.067346  452036 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1227 20:45:08.067486  452036 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-859716 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-859716 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000875242s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-859716 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-859716 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000875242s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
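The failure mode here is the kubelet never answering its local health endpoint, so kubeadm times out after four minutes in wait-control-plane. The checks the output itself suggests, plus the probe kubeadm uses and a cgroup-version check relevant to the cgroups v1 warning, as they could be run against this profile (a sketch, not part of the test run):

	minikube -p force-systemd-env-859716 ssh -- sudo systemctl status kubelet --no-pager
	minikube -p force-systemd-env-859716 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
	minikube -p force-systemd-env-859716 ssh -- curl -sS http://127.0.0.1:10248/healthz
	minikube -p force-systemd-env-859716 ssh -- stat -fc %T /sys/fs/cgroup   # cgroup2fs = v2, tmpfs = v1
	minikube -p force-systemd-env-859716 ssh -- grep -i failCgroupV1 /var/lib/kubelet/config.yaml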
	
	I1227 20:45:08.067566  452036 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1227 20:45:08.530173  452036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:45:08.542901  452036 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:45:08.542965  452036 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:45:08.551782  452036 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:45:08.551801  452036 kubeadm.go:158] found existing configuration files:
	
	I1227 20:45:08.551851  452036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 20:45:08.559556  452036 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:45:08.559615  452036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:45:08.568009  452036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 20:45:08.575813  452036 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:45:08.575872  452036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:45:08.583046  452036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 20:45:08.590410  452036 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:45:08.590474  452036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:45:08.597556  452036 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 20:45:08.604938  452036 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:45:08.605003  452036 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 20:45:08.612396  452036 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:45:08.658235  452036 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 20:45:08.658300  452036 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 20:45:08.748038  452036 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 20:45:08.748111  452036 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 20:45:08.748146  452036 kubeadm.go:319] OS: Linux
	I1227 20:45:08.748192  452036 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 20:45:08.748241  452036 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 20:45:08.748288  452036 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 20:45:08.748336  452036 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 20:45:08.748384  452036 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 20:45:08.748437  452036 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 20:45:08.748483  452036 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 20:45:08.748533  452036 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 20:45:08.748579  452036 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 20:45:08.837158  452036 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 20:45:08.837264  452036 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 20:45:08.837350  452036 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 20:45:08.849023  452036 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 20:45:08.852099  452036 out.go:252]   - Generating certificates and keys ...
	I1227 20:45:08.852180  452036 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 20:45:08.852242  452036 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 20:45:08.852469  452036 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1227 20:45:08.852531  452036 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1227 20:45:08.852607  452036 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1227 20:45:08.852763  452036 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1227 20:45:08.852847  452036 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1227 20:45:08.853129  452036 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1227 20:45:08.853489  452036 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1227 20:45:08.853834  452036 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1227 20:45:08.854148  452036 kubeadm.go:319] [certs] Using the existing "sa" key
	I1227 20:45:08.854208  452036 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 20:45:08.895231  452036 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 20:45:09.508378  452036 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 20:45:09.789406  452036 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 20:45:10.273831  452036 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 20:45:10.363350  452036 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 20:45:10.364130  452036 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 20:45:10.366890  452036 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 20:45:10.370107  452036 out.go:252]   - Booting up control plane ...
	I1227 20:45:10.370207  452036 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 20:45:10.370285  452036 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 20:45:10.370352  452036 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 20:45:10.393553  452036 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 20:45:10.393664  452036 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 20:45:10.402600  452036 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 20:45:10.402700  452036 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 20:45:10.402740  452036 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 20:45:10.555733  452036 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 20:45:10.555856  452036 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 20:49:10.555666  452036 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000308235s
	I1227 20:49:10.558472  452036 kubeadm.go:319] 
	I1227 20:49:10.558610  452036 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 20:49:10.558673  452036 kubeadm.go:319] 	- The kubelet is not running
	I1227 20:49:10.558862  452036 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 20:49:10.558870  452036 kubeadm.go:319] 
	I1227 20:49:10.559299  452036 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 20:49:10.559363  452036 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 20:49:10.559419  452036 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 20:49:10.559424  452036 kubeadm.go:319] 
	I1227 20:49:10.561921  452036 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 20:49:10.562813  452036 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 20:49:10.563459  452036 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 20:49:10.563722  452036 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 20:49:10.563728  452036 kubeadm.go:319] 
	I1227 20:49:10.563864  452036 kubeadm.go:403] duration metric: took 8m6.865497444s to StartCluster
	I1227 20:49:10.563898  452036 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:49:10.563957  452036 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:49:10.564123  452036 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 20:49:10.609231  452036 cri.go:96] found id: ""
	I1227 20:49:10.609276  452036 logs.go:282] 0 containers: []
	W1227 20:49:10.609286  452036 logs.go:284] No container was found matching "kube-apiserver"
	I1227 20:49:10.609309  452036 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:49:10.609383  452036 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:49:10.648631  452036 cri.go:96] found id: ""
	I1227 20:49:10.648671  452036 logs.go:282] 0 containers: []
	W1227 20:49:10.648684  452036 logs.go:284] No container was found matching "etcd"
	I1227 20:49:10.648707  452036 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:49:10.648798  452036 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:49:10.681856  452036 cri.go:96] found id: ""
	I1227 20:49:10.681893  452036 logs.go:282] 0 containers: []
	W1227 20:49:10.681902  452036 logs.go:284] No container was found matching "coredns"
	I1227 20:49:10.681927  452036 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:49:10.682008  452036 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:49:10.714883  452036 cri.go:96] found id: ""
	I1227 20:49:10.714921  452036 logs.go:282] 0 containers: []
	W1227 20:49:10.714932  452036 logs.go:284] No container was found matching "kube-scheduler"
	I1227 20:49:10.714939  452036 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:49:10.715052  452036 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:49:10.781708  452036 cri.go:96] found id: ""
	I1227 20:49:10.781734  452036 logs.go:282] 0 containers: []
	W1227 20:49:10.781743  452036 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:49:10.781749  452036 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:49:10.781862  452036 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:49:10.828251  452036 cri.go:96] found id: ""
	I1227 20:49:10.828279  452036 logs.go:282] 0 containers: []
	W1227 20:49:10.828289  452036 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 20:49:10.828303  452036 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:49:10.828401  452036 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:49:10.855682  452036 cri.go:96] found id: ""
	I1227 20:49:10.855711  452036 logs.go:282] 0 containers: []
	W1227 20:49:10.855720  452036 logs.go:284] No container was found matching "kindnet"
	I1227 20:49:10.855731  452036 logs.go:123] Gathering logs for dmesg ...
	I1227 20:49:10.855768  452036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:49:10.876376  452036 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:49:10.876411  452036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:49:10.998270  452036 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:49:10.989903    4910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:49:10.990874    4910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:49:10.992447    4910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:49:10.992784    4910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:49:10.994267    4910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:49:10.989903    4910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:49:10.990874    4910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:49:10.992447    4910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:49:10.992784    4910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:49:10.994267    4910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:49:10.998291  452036 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:49:10.998304  452036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:49:11.037985  452036 logs.go:123] Gathering logs for container status ...
	I1227 20:49:11.038020  452036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:49:11.067998  452036 logs.go:123] Gathering logs for kubelet ...
	I1227 20:49:11.068027  452036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1227 20:49:11.137270  452036 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000308235s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 20:49:11.137335  452036 out.go:285] * 
	W1227 20:49:11.137409  452036 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000308235s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 20:49:11.137465  452036 out.go:285] * 
	W1227 20:49:11.137751  452036 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 20:49:11.146674  452036 out.go:203] 
	W1227 20:49:11.150219  452036 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000308235s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 20:49:11.150286  452036 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 20:49:11.150309  452036 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 20:49:11.153427  452036 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-859716 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 109
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2025-12-27 20:49:11.260329578 +0000 UTC m=+3228.765111373
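The failure above is the kubelet never reporting healthy on 127.0.0.1:10248 on this cgroup v1 host. A minimal manual triage sketch, following only the suggestions already printed in the log (the profile name and start flags are the ones used by this run; this is illustrative, not a verified fix):

	# Inspect the kubelet on the node, as the kubeadm output suggests
	minikube -p force-systemd-env-859716 ssh -- sudo systemctl status kubelet --no-pager
	minikube -p force-systemd-env-859716 ssh -- sudo journalctl -xeu kubelet | tail -n 100

	# Retry the start with the cgroup driver hinted at in the Suggestion line
	minikube delete -p force-systemd-env-859716
	minikube start -p force-systemd-env-859716 --memory=3072 --driver=docker --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd

The SystemVerification warning also notes that kubelet v1.35+ on a cgroup v1 node additionally needs the kubelet configuration option FailCgroupV1 set to false; how that option is wired through minikube is not shown in this log.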
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-env-859716
helpers_test.go:244: (dbg) docker inspect force-systemd-env-859716:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "50e9f8b01f24fb0be22d4eb0da70f61cc38bfd95f7986ecf09948a7aef3111f3",
	        "Created": "2025-12-27T20:40:55.116315545Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 452565,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:40:55.200294257Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/50e9f8b01f24fb0be22d4eb0da70f61cc38bfd95f7986ecf09948a7aef3111f3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/50e9f8b01f24fb0be22d4eb0da70f61cc38bfd95f7986ecf09948a7aef3111f3/hostname",
	        "HostsPath": "/var/lib/docker/containers/50e9f8b01f24fb0be22d4eb0da70f61cc38bfd95f7986ecf09948a7aef3111f3/hosts",
	        "LogPath": "/var/lib/docker/containers/50e9f8b01f24fb0be22d4eb0da70f61cc38bfd95f7986ecf09948a7aef3111f3/50e9f8b01f24fb0be22d4eb0da70f61cc38bfd95f7986ecf09948a7aef3111f3-json.log",
	        "Name": "/force-systemd-env-859716",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-859716:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-859716",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "50e9f8b01f24fb0be22d4eb0da70f61cc38bfd95f7986ecf09948a7aef3111f3",
	                "LowerDir": "/var/lib/docker/overlay2/15c2972ce5a121cfb569241ef95f80834bc4ebb6a2f96fe8008d1211b764c61d-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/15c2972ce5a121cfb569241ef95f80834bc4ebb6a2f96fe8008d1211b764c61d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/15c2972ce5a121cfb569241ef95f80834bc4ebb6a2f96fe8008d1211b764c61d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/15c2972ce5a121cfb569241ef95f80834bc4ebb6a2f96fe8008d1211b764c61d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-859716",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-859716/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-859716",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-859716",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-859716",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8aee7d7a81e326f1832fb3e3401e4367d1763f587a729910f691959b5edf7b0c",
	            "SandboxKey": "/var/run/docker/netns/8aee7d7a81e3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33373"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33374"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33377"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33375"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33376"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-859716": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:86:a5:0e:52:a7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "58ce77e21f34ef62e928b07bf14c47cc93fc88d1d2bb22fe78d3c3a07f32a79d",
	                    "EndpointID": "065a61ce996f6d4705eb95848d298e7048e16fee3ed6ed7fbfcd1e3a3a67fb1a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-859716",
	                        "50e9f8b01f24"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
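When the full inspect dump above is more than the post-mortem needs, the same data can be pulled field-by-field with docker's Go-template filter; a small sketch against the container from this run:

	# Status and init PID only
	docker inspect force-systemd-env-859716 --format '{{.State.Status}} {{.State.Pid}}'
	# Network name and IP (matches the NetworkSettings block above)
	docker inspect force-systemd-env-859716 --format '{{range $n, $c := .NetworkSettings.Networks}}{{$n}} {{$c.IPAddress}}{{end}}'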
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-859716 -n force-systemd-env-859716
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-859716 -n force-systemd-env-859716: exit status 6 (507.263819ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 20:49:11.794901  476736 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-859716" does not appear in /home/jenkins/minikube-integration/22332-272475/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
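The status check above reports a running host but a kubeconfig that no longer lists this profile, and the harness notes exit status 6 may be OK here. The log's own hint is `minikube update-context`; a minimal sketch of that repair step (profile name taken from this run):

	# Re-point kubectl at the current endpoint for this profile
	minikube -p force-systemd-env-859716 update-context
	kubectl config current-context
	# The apiserver is still down in this run, so this is expected to fail, but it confirms the endpoint
	kubectl get nodes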
helpers_test.go:253: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-859716 logs -n 25
helpers_test.go:261: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-037975 sudo cat /etc/kubernetes/kubelet.conf                                                                      │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo cat /var/lib/kubelet/config.yaml                                                                      │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo systemctl status docker --all --full --no-pager                                                       │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo systemctl cat docker --no-pager                                                                       │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo cat /etc/docker/daemon.json                                                                           │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo docker system info                                                                                    │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo systemctl status cri-docker --all --full --no-pager                                                   │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo systemctl cat cri-docker --no-pager                                                                   │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                              │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo cri-dockerd --version                                                                                 │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo systemctl status containerd --all --full --no-pager                                                   │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo systemctl cat containerd --no-pager                                                                   │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo cat /lib/systemd/system/containerd.service                                                            │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo cat /etc/containerd/config.toml                                                                       │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo containerd config dump                                                                                │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo systemctl status crio --all --full --no-pager                                                         │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo systemctl cat crio --no-pager                                                                         │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo crio config                                                                                           │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ delete  │ -p cilium-037975                                                                                                            │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │ 27 Dec 25 20:45 UTC │
	│ start   │ -p cert-expiration-629954 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-629954    │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │ 27 Dec 25 20:45 UTC │
	│ start   │ -p cert-expiration-629954 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                   │ cert-expiration-629954    │ jenkins │ v1.37.0 │ 27 Dec 25 20:48 UTC │ 27 Dec 25 20:48 UTC │
	│ delete  │ -p cert-expiration-629954                                                                                                   │ cert-expiration-629954    │ jenkins │ v1.37.0 │ 27 Dec 25 20:48 UTC │ 27 Dec 25 20:48 UTC │
	│ start   │ -p force-systemd-flag-604544 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-604544 │ jenkins │ v1.37.0 │ 27 Dec 25 20:48 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:48:58
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
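Each entry that follows uses that klog layout. As a worked example, the header of the first line below, "I1227 20:48:58.022662  474910 out.go:360]", reads as: severity I (info), date 12/27, wall-clock time 20:48:58.022662, reporting process id 474910, and source location out.go line 360, with the message after the closing bracket.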
	I1227 20:48:58.022662  474910 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:48:58.022826  474910 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:48:58.022837  474910 out.go:374] Setting ErrFile to fd 2...
	I1227 20:48:58.022844  474910 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:48:58.023323  474910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:48:58.027649  474910 out.go:368] Setting JSON to false
	I1227 20:48:58.028816  474910 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9090,"bootTime":1766859448,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:48:58.028911  474910 start.go:143] virtualization:  
	I1227 20:48:58.032577  474910 out.go:179] * [force-systemd-flag-604544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:48:58.037174  474910 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:48:58.037290  474910 notify.go:221] Checking for updates...
	I1227 20:48:58.043960  474910 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:48:58.047178  474910 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:48:58.050413  474910 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:48:58.053650  474910 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:48:58.056754  474910 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:48:58.060387  474910 config.go:182] Loaded profile config "force-systemd-env-859716": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:48:58.060506  474910 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:48:58.090572  474910 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:48:58.090696  474910 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:48:58.150506  474910 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:48:58.141930476 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:48:58.150610  474910 docker.go:319] overlay module found
	I1227 20:48:58.153805  474910 out.go:179] * Using the docker driver based on user configuration
	I1227 20:48:58.156787  474910 start.go:309] selected driver: docker
	I1227 20:48:58.156805  474910 start.go:928] validating driver "docker" against <nil>
	I1227 20:48:58.156819  474910 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:48:58.157586  474910 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:48:58.209159  474910 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:48:58.200108623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:48:58.209297  474910 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 20:48:58.209537  474910 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 20:48:58.212623  474910 out.go:179] * Using Docker driver with root privileges
	I1227 20:48:58.215607  474910 cni.go:84] Creating CNI manager for ""
	I1227 20:48:58.215667  474910 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:48:58.215685  474910 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 20:48:58.215753  474910 start.go:353] cluster config:
	{Name:force-systemd-flag-604544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-604544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:48:58.218830  474910 out.go:179] * Starting "force-systemd-flag-604544" primary control-plane node in "force-systemd-flag-604544" cluster
	I1227 20:48:58.221703  474910 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:48:58.224591  474910 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:48:58.227405  474910 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:48:58.227455  474910 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:48:58.227468  474910 cache.go:65] Caching tarball of preloaded images
	I1227 20:48:58.227500  474910 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:48:58.227550  474910 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:48:58.227561  474910 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:48:58.227666  474910 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/config.json ...
	I1227 20:48:58.227682  474910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/config.json: {Name:mk9ddeff611679779470328b0716153b904d87e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:48:58.246023  474910 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:48:58.246051  474910 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:48:58.246065  474910 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:48:58.246093  474910 start.go:360] acquireMachinesLock for force-systemd-flag-604544: {Name:mk858d2836eca811f8888fdbe3932081e00f5ad7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:48:58.246203  474910 start.go:364] duration metric: took 89.917µs to acquireMachinesLock for "force-systemd-flag-604544"
	I1227 20:48:58.246242  474910 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-604544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-604544 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:48:58.246307  474910 start.go:125] createHost starting for "" (driver="docker")
	I1227 20:48:58.249698  474910 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 20:48:58.249922  474910 start.go:159] libmachine.API.Create for "force-systemd-flag-604544" (driver="docker")
	I1227 20:48:58.249955  474910 client.go:173] LocalClient.Create starting
	I1227 20:48:58.250041  474910 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem
	I1227 20:48:58.250080  474910 main.go:144] libmachine: Decoding PEM data...
	I1227 20:48:58.250100  474910 main.go:144] libmachine: Parsing certificate...
	I1227 20:48:58.250156  474910 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem
	I1227 20:48:58.250183  474910 main.go:144] libmachine: Decoding PEM data...
	I1227 20:48:58.250197  474910 main.go:144] libmachine: Parsing certificate...
	I1227 20:48:58.250606  474910 cli_runner.go:164] Run: docker network inspect force-systemd-flag-604544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 20:48:58.265737  474910 cli_runner.go:211] docker network inspect force-systemd-flag-604544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 20:48:58.265813  474910 network_create.go:284] running [docker network inspect force-systemd-flag-604544] to gather additional debugging logs...
	I1227 20:48:58.265833  474910 cli_runner.go:164] Run: docker network inspect force-systemd-flag-604544
	W1227 20:48:58.281272  474910 cli_runner.go:211] docker network inspect force-systemd-flag-604544 returned with exit code 1
	I1227 20:48:58.281303  474910 network_create.go:287] error running [docker network inspect force-systemd-flag-604544]: docker network inspect force-systemd-flag-604544: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-604544 not found
	I1227 20:48:58.281317  474910 network_create.go:289] output of [docker network inspect force-systemd-flag-604544]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-604544 not found
	
	** /stderr **
	I1227 20:48:58.281421  474910 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:48:58.297571  474910 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9521cb9225c5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:1d:ef:38:b7:a6} reservation:<nil>}
	I1227 20:48:58.297942  474910 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-68d11cc2ab47 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:8d:ad:37:cb:fe} reservation:<nil>}
	I1227 20:48:58.298209  474910 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d3b7cfff4895 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:4a:e3:08:10:2f} reservation:<nil>}
	I1227 20:48:58.298488  474910 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-58ce77e21f34 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b2:3c:b3:af:25:63} reservation:<nil>}
	I1227 20:48:58.298906  474910 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400195df90}
	I1227 20:48:58.298935  474910 network_create.go:124] attempt to create docker network force-systemd-flag-604544 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1227 20:48:58.298993  474910 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-604544 force-systemd-flag-604544
	I1227 20:48:58.368476  474910 network_create.go:108] docker network force-systemd-flag-604544 192.168.85.0/24 created
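minikube walks the existing bridge networks above, skips the four subnets already claimed by other profiles, and creates the new bridge on the first free /24 (192.168.85.0/24). A minimal sketch of how that result could be double-checked from the host; the profile name is the one from this run, everything else is plain Docker CLI:

	# Bridge networks Docker currently manages
	docker network ls --filter driver=bridge --format '{{.Name}}'
	# Subnet and gateway minikube attached to the new profile network
	docker network inspect force-systemd-flag-604544 \
	    --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
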
	I1227 20:48:58.368511  474910 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-604544" container
	I1227 20:48:58.368597  474910 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 20:48:58.383925  474910 cli_runner.go:164] Run: docker volume create force-systemd-flag-604544 --label name.minikube.sigs.k8s.io=force-systemd-flag-604544 --label created_by.minikube.sigs.k8s.io=true
	I1227 20:48:58.401068  474910 oci.go:103] Successfully created a docker volume force-systemd-flag-604544
	I1227 20:48:58.401171  474910 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-604544-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-604544 --entrypoint /usr/bin/test -v force-systemd-flag-604544:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 20:48:58.940729  474910 oci.go:107] Successfully prepared a docker volume force-systemd-flag-604544
	I1227 20:48:58.940793  474910 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:48:58.940803  474910 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 20:48:58.940889  474910 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-604544:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 20:49:03.121551  474910 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-604544:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.180622352s)
	I1227 20:49:03.121585  474910 kic.go:203] duration metric: took 4.180778663s to extract preloaded images to volume ...
	W1227 20:49:03.121727  474910 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 20:49:03.121837  474910 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 20:49:03.185600  474910 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-604544 --name force-systemd-flag-604544 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-604544 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-604544 --network force-systemd-flag-604544 --ip 192.168.85.2 --volume force-systemd-flag-604544:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 20:49:03.516910  474910 cli_runner.go:164] Run: docker container inspect force-systemd-flag-604544 --format={{.State.Running}}
	I1227 20:49:03.540710  474910 cli_runner.go:164] Run: docker container inspect force-systemd-flag-604544 --format={{.State.Status}}
	I1227 20:49:03.560817  474910 cli_runner.go:164] Run: docker exec force-systemd-flag-604544 stat /var/lib/dpkg/alternatives/iptables
	I1227 20:49:03.612063  474910 oci.go:144] the created container "force-systemd-flag-604544" has a running status.
	I1227 20:49:03.612090  474910 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/force-systemd-flag-604544/id_rsa...
	I1227 20:49:03.827829  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/force-systemd-flag-604544/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 20:49:03.827933  474910 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22332-272475/.minikube/machines/force-systemd-flag-604544/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 20:49:03.851182  474910 cli_runner.go:164] Run: docker container inspect force-systemd-flag-604544 --format={{.State.Status}}
	I1227 20:49:03.875121  474910 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 20:49:03.875140  474910 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-604544 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 20:49:03.954889  474910 cli_runner.go:164] Run: docker container inspect force-systemd-flag-604544 --format={{.State.Status}}
	I1227 20:49:03.975939  474910 machine.go:94] provisionDockerMachine start ...
	I1227 20:49:03.976121  474910 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-604544
	I1227 20:49:04.008338  474910 main.go:144] libmachine: Using SSH client type: native
	I1227 20:49:04.008719  474910 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1227 20:49:04.008809  474910 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:49:04.009686  474910 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45156->127.0.0.1:33398: read: connection reset by peer
	I1227 20:49:07.148948  474910 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-604544
	
	I1227 20:49:07.148974  474910 ubuntu.go:182] provisioning hostname "force-systemd-flag-604544"
	I1227 20:49:07.149045  474910 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-604544
	I1227 20:49:07.166303  474910 main.go:144] libmachine: Using SSH client type: native
	I1227 20:49:07.166622  474910 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1227 20:49:07.166640  474910 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-604544 && echo "force-systemd-flag-604544" | sudo tee /etc/hostname
	I1227 20:49:07.311910  474910 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-604544
	
	I1227 20:49:07.312019  474910 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-604544
	I1227 20:49:07.329329  474910 main.go:144] libmachine: Using SSH client type: native
	I1227 20:49:07.329857  474910 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1227 20:49:07.329886  474910 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-604544' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-604544/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-604544' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:49:07.465721  474910 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:49:07.465750  474910 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:49:07.465771  474910 ubuntu.go:190] setting up certificates
	I1227 20:49:07.465821  474910 provision.go:84] configureAuth start
	I1227 20:49:07.465897  474910 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-604544
	I1227 20:49:07.484861  474910 provision.go:143] copyHostCerts
	I1227 20:49:07.484917  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:49:07.484955  474910 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:49:07.484967  474910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:49:07.485044  474910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:49:07.485156  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:49:07.485180  474910 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:49:07.485191  474910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:49:07.485224  474910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:49:07.485278  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:49:07.485296  474910 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:49:07.485305  474910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:49:07.485330  474910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:49:07.485380  474910 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-604544 san=[127.0.0.1 192.168.85.2 force-systemd-flag-604544 localhost minikube]
	I1227 20:49:08.210825  474910 provision.go:177] copyRemoteCerts
	I1227 20:49:08.210891  474910 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:49:08.210931  474910 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-604544
	I1227 20:49:08.227949  474910 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/force-systemd-flag-604544/id_rsa Username:docker}
	I1227 20:49:08.325095  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:49:08.325156  474910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:49:08.342545  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:49:08.342619  474910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:49:08.360228  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:49:08.360290  474910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1227 20:49:08.377376  474910 provision.go:87] duration metric: took 911.524856ms to configureAuth
	I1227 20:49:08.377415  474910 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:49:08.377621  474910 config.go:182] Loaded profile config "force-systemd-flag-604544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:49:08.377740  474910 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-604544
	I1227 20:49:08.394742  474910 main.go:144] libmachine: Using SSH client type: native
	I1227 20:49:08.395076  474910 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1227 20:49:08.395097  474910 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:49:08.688359  474910 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:49:08.688382  474910 machine.go:97] duration metric: took 4.712423977s to provisionDockerMachine
	I1227 20:49:08.688408  474910 client.go:176] duration metric: took 10.438425367s to LocalClient.Create
	I1227 20:49:08.688422  474910 start.go:167] duration metric: took 10.438501246s to libmachine.API.Create "force-systemd-flag-604544"
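Provisioning finishes by writing CRIO_MINIKUBE_OPTIONS (the --insecure-registry flag for the 10.96.0.0/12 service CIDR) into /etc/sysconfig/crio.minikube and restarting CRI-O, as shown a few lines up. A quick way to confirm that file from the host is sketched below; how the kicbase crio unit consumes the environment file is not shown in this log, so only the file contents are checked:

	# Read the generated CRI-O environment file inside the node container
	minikube ssh -p force-systemd-flag-604544 -- cat /etc/sysconfig/crio.minikube
	# Expected output, per the provisioning step above:
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
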
	I1227 20:49:08.688429  474910 start.go:293] postStartSetup for "force-systemd-flag-604544" (driver="docker")
	I1227 20:49:08.688439  474910 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:49:08.688498  474910 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:49:08.688538  474910 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-604544
	I1227 20:49:08.706419  474910 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/force-systemd-flag-604544/id_rsa Username:docker}
	I1227 20:49:08.806544  474910 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:49:08.810249  474910 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:49:08.810279  474910 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:49:08.810291  474910 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:49:08.810344  474910 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:49:08.810439  474910 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:49:08.810450  474910 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:49:08.810547  474910 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:49:08.818843  474910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:49:08.837555  474910 start.go:296] duration metric: took 149.110226ms for postStartSetup
	I1227 20:49:08.837944  474910 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-604544
	I1227 20:49:08.856382  474910 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/force-systemd-flag-604544/config.json ...
	I1227 20:49:08.856668  474910 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:49:08.856893  474910 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-604544
	I1227 20:49:08.875511  474910 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/force-systemd-flag-604544/id_rsa Username:docker}
	I1227 20:49:08.971069  474910 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:49:08.976007  474910 start.go:128] duration metric: took 10.729684535s to createHost
	I1227 20:49:08.976037  474910 start.go:83] releasing machines lock for "force-systemd-flag-604544", held for 10.729817988s
	I1227 20:49:08.976111  474910 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-604544
	I1227 20:49:08.992776  474910 ssh_runner.go:195] Run: cat /version.json
	I1227 20:49:08.992792  474910 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:49:08.992829  474910 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-604544
	I1227 20:49:08.992854  474910 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-604544
	I1227 20:49:09.015032  474910 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/force-systemd-flag-604544/id_rsa Username:docker}
	I1227 20:49:09.031160  474910 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/force-systemd-flag-604544/id_rsa Username:docker}
	I1227 20:49:09.208999  474910 ssh_runner.go:195] Run: systemctl --version
	I1227 20:49:09.216221  474910 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:49:09.263226  474910 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:49:09.268926  474910 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:49:09.269034  474910 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:49:09.298703  474910 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 20:49:09.298737  474910 start.go:496] detecting cgroup driver to use...
	I1227 20:49:09.298752  474910 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 20:49:09.298814  474910 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:49:09.315989  474910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:49:09.328518  474910 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:49:09.328581  474910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:49:09.346762  474910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:49:09.365413  474910 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:49:09.491738  474910 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:49:09.618150  474910 docker.go:234] disabling docker service ...
	I1227 20:49:09.618231  474910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:49:09.639553  474910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:49:09.653330  474910 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:49:09.778390  474910 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:49:09.889294  474910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:49:09.901618  474910 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:49:09.915792  474910 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:49:09.915908  474910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:49:09.925226  474910 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 20:49:09.925379  474910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:49:09.935164  474910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:49:09.943974  474910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:49:09.953748  474910 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:49:09.966112  474910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:49:09.979589  474910 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:49:10.000678  474910 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:49:10.012258  474910 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:49:10.022355  474910 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:49:10.031295  474910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:49:10.154813  474910 ssh_runner.go:195] Run: sudo systemctl restart crio
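The sed edits above (pause image, systemd cgroup manager, conmon cgroup, and the unprivileged-port sysctl) all target /etc/crio/crio.conf.d/02-crio.conf before this restart. Reconstructed from just those commands, the affected keys of that drop-in would look roughly like the sketch below; the section headers are the usual CRI-O ones and are assumed here, since the original file is never dumped in this log:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
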
	I1227 20:49:10.338602  474910 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:49:10.338676  474910 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:49:10.342897  474910 start.go:574] Will wait 60s for crictl version
	I1227 20:49:10.343002  474910 ssh_runner.go:195] Run: which crictl
	I1227 20:49:10.346693  474910 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:49:10.372684  474910 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:49:10.372778  474910 ssh_runner.go:195] Run: crio --version
	I1227 20:49:10.399215  474910 ssh_runner.go:195] Run: crio --version
	I1227 20:49:10.435295  474910 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:49:10.555666  452036 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000308235s
	I1227 20:49:10.558472  452036 kubeadm.go:319] 
	I1227 20:49:10.558610  452036 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 20:49:10.558673  452036 kubeadm.go:319] 	- The kubelet is not running
	I1227 20:49:10.558862  452036 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 20:49:10.558870  452036 kubeadm.go:319] 
	I1227 20:49:10.559299  452036 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 20:49:10.559363  452036 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 20:49:10.559419  452036 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 20:49:10.559424  452036 kubeadm.go:319] 
	I1227 20:49:10.561921  452036 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 20:49:10.562813  452036 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 20:49:10.563459  452036 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 20:49:10.563722  452036 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 20:49:10.563728  452036 kubeadm.go:319] 
	I1227 20:49:10.563864  452036 kubeadm.go:403] duration metric: took 8m6.865497444s to StartCluster
	I1227 20:49:10.563898  452036 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:49:10.563957  452036 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:49:10.564123  452036 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 20:49:10.609231  452036 cri.go:96] found id: ""
	I1227 20:49:10.609276  452036 logs.go:282] 0 containers: []
	W1227 20:49:10.609286  452036 logs.go:284] No container was found matching "kube-apiserver"
	I1227 20:49:10.609309  452036 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:49:10.609383  452036 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:49:10.648631  452036 cri.go:96] found id: ""
	I1227 20:49:10.648671  452036 logs.go:282] 0 containers: []
	W1227 20:49:10.648684  452036 logs.go:284] No container was found matching "etcd"
	I1227 20:49:10.648707  452036 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:49:10.648798  452036 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:49:10.681856  452036 cri.go:96] found id: ""
	I1227 20:49:10.681893  452036 logs.go:282] 0 containers: []
	W1227 20:49:10.681902  452036 logs.go:284] No container was found matching "coredns"
	I1227 20:49:10.681927  452036 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:49:10.682008  452036 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:49:10.714883  452036 cri.go:96] found id: ""
	I1227 20:49:10.714921  452036 logs.go:282] 0 containers: []
	W1227 20:49:10.714932  452036 logs.go:284] No container was found matching "kube-scheduler"
	I1227 20:49:10.714939  452036 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:49:10.715052  452036 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:49:10.781708  452036 cri.go:96] found id: ""
	I1227 20:49:10.781734  452036 logs.go:282] 0 containers: []
	W1227 20:49:10.781743  452036 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:49:10.781749  452036 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:49:10.781862  452036 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:49:10.828251  452036 cri.go:96] found id: ""
	I1227 20:49:10.828279  452036 logs.go:282] 0 containers: []
	W1227 20:49:10.828289  452036 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 20:49:10.828303  452036 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:49:10.828401  452036 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:49:10.855682  452036 cri.go:96] found id: ""
	I1227 20:49:10.855711  452036 logs.go:282] 0 containers: []
	W1227 20:49:10.855720  452036 logs.go:284] No container was found matching "kindnet"
	I1227 20:49:10.855731  452036 logs.go:123] Gathering logs for dmesg ...
	I1227 20:49:10.855768  452036 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:49:10.876376  452036 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:49:10.876411  452036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:49:10.998270  452036 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:49:10.989903    4910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:49:10.990874    4910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:49:10.992447    4910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:49:10.992784    4910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:49:10.994267    4910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:49:10.989903    4910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:49:10.990874    4910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:49:10.992447    4910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:49:10.992784    4910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:49:10.994267    4910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:49:10.998291  452036 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:49:10.998304  452036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:49:11.037985  452036 logs.go:123] Gathering logs for container status ...
	I1227 20:49:11.038020  452036 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:49:11.067998  452036 logs.go:123] Gathering logs for kubelet ...
	I1227 20:49:11.068027  452036 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1227 20:49:11.137270  452036 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000308235s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 20:49:11.137335  452036 out.go:285] * 
	W1227 20:49:11.137409  452036 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000308235s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 20:49:11.137465  452036 out.go:285] * 
	W1227 20:49:11.137751  452036 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 20:49:11.146674  452036 out.go:203] 
	W1227 20:49:11.150219  452036 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000308235s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 20:49:11.150286  452036 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 20:49:11.150309  452036 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 20:49:11.153427  452036 out.go:203] 
	
	
	==> CRI-O <==
	Dec 27 20:41:02 force-systemd-env-859716 crio[836]: time="2025-12-27T20:41:02.178399687Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 27 20:41:02 force-systemd-env-859716 crio[836]: time="2025-12-27T20:41:02.178434376Z" level=info msg="Starting seccomp notifier watcher"
	Dec 27 20:41:02 force-systemd-env-859716 crio[836]: time="2025-12-27T20:41:02.178477887Z" level=info msg="Create NRI interface"
	Dec 27 20:41:02 force-systemd-env-859716 crio[836]: time="2025-12-27T20:41:02.178582876Z" level=info msg="built-in NRI default validator is disabled"
	Dec 27 20:41:02 force-systemd-env-859716 crio[836]: time="2025-12-27T20:41:02.17859113Z" level=info msg="runtime interface created"
	Dec 27 20:41:02 force-systemd-env-859716 crio[836]: time="2025-12-27T20:41:02.178601812Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 27 20:41:02 force-systemd-env-859716 crio[836]: time="2025-12-27T20:41:02.178607958Z" level=info msg="runtime interface starting up..."
	Dec 27 20:41:02 force-systemd-env-859716 crio[836]: time="2025-12-27T20:41:02.178613652Z" level=info msg="starting plugins..."
	Dec 27 20:41:02 force-systemd-env-859716 crio[836]: time="2025-12-27T20:41:02.178627264Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 27 20:41:02 force-systemd-env-859716 crio[836]: time="2025-12-27T20:41:02.17869771Z" level=info msg="No systemd watchdog enabled"
	Dec 27 20:41:02 force-systemd-env-859716 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 27 20:41:03 force-systemd-env-859716 crio[836]: time="2025-12-27T20:41:03.998982635Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=a13a4d1b-7a93-4088-96d8-8e43d7eaed0d name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:41:03 force-systemd-env-859716 crio[836]: time="2025-12-27T20:41:03.999782359Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=06bfaeb7-d4dd-4a03-9f89-dae31b27c911 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:41:04 force-systemd-env-859716 crio[836]: time="2025-12-27T20:41:04.000283206Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=d2422fed-170c-4deb-891c-682dc63d7751 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:41:04 force-systemd-env-859716 crio[836]: time="2025-12-27T20:41:04.000781731Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=2378add2-20ec-4cd0-b7ab-27e5319b001c name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:41:04 force-systemd-env-859716 crio[836]: time="2025-12-27T20:41:04.001275201Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=f9aa279d-6f65-46b6-b5e9-0c2884e4ef91 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:41:04 force-systemd-env-859716 crio[836]: time="2025-12-27T20:41:04.001782481Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=4e2d1d01-ca5b-4aa2-aa8b-5bdc86458b08 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:41:04 force-systemd-env-859716 crio[836]: time="2025-12-27T20:41:04.002264916Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=ddc8076c-32d8-4c0f-a455-95303661dc05 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:45:08 force-systemd-env-859716 crio[836]: time="2025-12-27T20:45:08.840729389Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=3048c07b-9350-40ca-be89-15b952ecffc9 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:45:08 force-systemd-env-859716 crio[836]: time="2025-12-27T20:45:08.844708515Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=c1181245-ed1b-48de-acd2-8b1cc9d9c333 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:45:08 force-systemd-env-859716 crio[836]: time="2025-12-27T20:45:08.845356576Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=e615ce06-96a7-4d4a-b038-a7b8cb37e1e4 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:45:08 force-systemd-env-859716 crio[836]: time="2025-12-27T20:45:08.846120702Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=0c43ae38-aa13-48c7-97c1-c9960d3d179c name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:45:08 force-systemd-env-859716 crio[836]: time="2025-12-27T20:45:08.846622462Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=df88f65f-6846-4ea6-a4f0-d3ef0504d3dd name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:45:08 force-systemd-env-859716 crio[836]: time="2025-12-27T20:45:08.847309241Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=125b4ca7-9e39-4d96-8890-ce9fa6af2bff name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:45:08 force-systemd-env-859716 crio[836]: time="2025-12-27T20:45:08.84778893Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=71f7e4f6-6c36-4d41-81eb-12458ac02e8b name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:49:12.610806    5032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:49:12.611217    5032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:49:12.614000    5032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:49:12.614400    5032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:49:12.615634    5032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +3.163851] overlayfs: idmapped layers are currently not supported
	[Dec27 20:16] overlayfs: idmapped layers are currently not supported
	[ +35.129102] overlayfs: idmapped layers are currently not supported
	[Dec27 20:17] overlayfs: idmapped layers are currently not supported
	[Dec27 20:19] overlayfs: idmapped layers are currently not supported
	[ +36.244108] systemd-journald[225]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 20:22] overlayfs: idmapped layers are currently not supported
	[Dec27 20:23] overlayfs: idmapped layers are currently not supported
	[Dec27 20:24] overlayfs: idmapped layers are currently not supported
	[Dec27 20:25] overlayfs: idmapped layers are currently not supported
	[ +35.447549] overlayfs: idmapped layers are currently not supported
	[Dec27 20:26] overlayfs: idmapped layers are currently not supported
	[Dec27 20:27] overlayfs: idmapped layers are currently not supported
	[  +6.770645] overlayfs: idmapped layers are currently not supported
	[Dec27 20:28] overlayfs: idmapped layers are currently not supported
	[ +25.872751] overlayfs: idmapped layers are currently not supported
	[Dec27 20:29] overlayfs: idmapped layers are currently not supported
	[ +32.997137] overlayfs: idmapped layers are currently not supported
	[Dec27 20:31] overlayfs: idmapped layers are currently not supported
	[Dec27 20:33] overlayfs: idmapped layers are currently not supported
	[ +33.772475] overlayfs: idmapped layers are currently not supported
	[Dec27 20:39] overlayfs: idmapped layers are currently not supported
	[Dec27 20:40] overlayfs: idmapped layers are currently not supported
	[Dec27 20:44] overlayfs: idmapped layers are currently not supported
	[Dec27 20:45] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:49:12 up  2:31,  0 user,  load average: 1.53, 1.59, 1.84
	Linux force-systemd-env-859716 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 27 20:49:10 force-systemd-env-859716 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 20:49:10 force-systemd-env-859716 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 27 20:49:10 force-systemd-env-859716 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:49:10 force-systemd-env-859716 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:49:10 force-systemd-env-859716 kubelet[4874]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 20:49:10 force-systemd-env-859716 kubelet[4874]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 20:49:10 force-systemd-env-859716 kubelet[4874]: E1227 20:49:10.810521    4874 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 20:49:10 force-systemd-env-859716 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 20:49:10 force-systemd-env-859716 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 20:49:11 force-systemd-env-859716 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 27 20:49:11 force-systemd-env-859716 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:49:11 force-systemd-env-859716 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:49:11 force-systemd-env-859716 kubelet[4930]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 20:49:11 force-systemd-env-859716 kubelet[4930]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 20:49:11 force-systemd-env-859716 kubelet[4930]: E1227 20:49:11.532216    4930 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 20:49:11 force-systemd-env-859716 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 20:49:11 force-systemd-env-859716 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 20:49:12 force-systemd-env-859716 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 27 20:49:12 force-systemd-env-859716 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:49:12 force-systemd-env-859716 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:49:12 force-systemd-env-859716 kubelet[4972]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 20:49:12 force-systemd-env-859716 kubelet[4972]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 20:49:12 force-systemd-env-859716 kubelet[4972]: E1227 20:49:12.305944    4972 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 20:49:12 force-systemd-env-859716 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 20:49:12 force-systemd-env-859716 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
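The kubelet log above pins down the failure: on this cgroup v1 host, kubelet v1.35 refuses to start ("kubelet is configured to not run on a host using cgroup v1"), so kubeadm's wait-control-plane phase never sees a healthy kubelet and TestForceSystemdEnv times out. A minimal diagnostic sketch under stated assumptions follows: it assumes the node container is still up and reachable via `minikube ssh`, and the `failCgroupV1` YAML key is an assumption derived from the `FailCgroupV1` option named in the kubeadm warning.

	# Confirm the node's cgroup mode: "tmpfs" indicates cgroup v1, "cgroup2fs" indicates v2.
	out/minikube-linux-arm64 ssh -p force-systemd-env-859716 -- stat -fc %T /sys/fs/cgroup/
	# Inspect the kubelet config kubeadm wrote (path taken from the log above); per the kubeadm
	# warning, 'FailCgroupV1' would have to be 'false' (assumed YAML key: failCgroupV1) for the
	# kubelet to tolerate a cgroup v1 host at all.
	out/minikube-linux-arm64 ssh -p force-systemd-env-859716 -- sudo grep -i cgroup /var/lib/kubelet/config.yaml
	# Alternatively, the suggestion minikube itself prints above:
	out/minikube-linux-arm64 start -p force-systemd-env-859716 --extra-config=kubelet.cgroup-driver=systemd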
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-859716 -n force-systemd-env-859716
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-859716 -n force-systemd-env-859716: exit status 6 (394.624753ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 20:49:13.143616  477118 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-859716" does not appear in /home/jenkins/minikube-integration/22332-272475/kubeconfig

                                                
                                                
** /stderr **
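The status output above also warns that kubectl points at a stale context and suggests `minikube update-context`. A small sketch of that recovery step, assuming the profile still exists; in this run the kubeconfig endpoint error shows the entry is missing entirely, so the commands are illustrative only:

	# Re-point the kubectl context at the profile's current endpoint, then confirm which
	# context kubectl is using.
	out/minikube-linux-arm64 update-context -p force-systemd-env-859716
	kubectl config current-context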
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-env-859716" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-env-859716" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-859716
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-859716: (2.161386373s)
--- FAIL: TestForceSystemdEnv (505.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (512.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 stop --alsologtostderr -v 5
E1227 20:07:13.967324  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:07:13.972674  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:07:13.983091  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:07:14.003467  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:07:14.044593  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:07:14.125010  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:07:14.285520  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:07:14.606116  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:07:15.247080  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:07:16.527571  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:07:19.088004  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-422549 stop --alsologtostderr -v 5: (37.632344815s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 start --wait true --alsologtostderr -v 5
E1227 20:07:24.208202  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:07:34.448888  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:07:54.130042  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:07:54.929990  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:08:21.819150  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:08:35.890226  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:09:57.811397  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:12:13.966502  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:12:41.651687  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:12:54.129393  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-422549 start --wait true --alsologtostderr -v 5: exit status 105 (7m49.3126693s)

                                                
                                                
-- stdout --
	* [ha-422549] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-422549" primary control-plane node in "ha-422549" cluster
	* Pulling base image v0.0.48-1766570851-22316 ...
	* Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	* Enabled addons: 
	
	* Starting "ha-422549-m02" control-plane node in "ha-422549" cluster
	* Pulling base image v0.0.48-1766570851-22316 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:07:23.018829  319301 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:07:23.019045  319301 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:07:23.019069  319301 out.go:374] Setting ErrFile to fd 2...
	I1227 20:07:23.019104  319301 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:07:23.019417  319301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:07:23.019931  319301 out.go:368] Setting JSON to false
	I1227 20:07:23.020994  319301 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6595,"bootTime":1766859448,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:07:23.021172  319301 start.go:143] virtualization:  
	I1227 20:07:23.026478  319301 out.go:179] * [ha-422549] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:07:23.029624  319301 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:07:23.029657  319301 notify.go:221] Checking for updates...
	I1227 20:07:23.035732  319301 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:07:23.038626  319301 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:07:23.041521  319301 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:07:23.044303  319301 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:07:23.047245  319301 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:07:23.050815  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:07:23.050954  319301 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:07:23.074861  319301 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:07:23.074978  319301 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:07:23.134894  319301 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 20:07:23.1261821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:07:23.135004  319301 docker.go:319] overlay module found
	I1227 20:07:23.138113  319301 out.go:179] * Using the docker driver based on existing profile
	I1227 20:07:23.140925  319301 start.go:309] selected driver: docker
	I1227 20:07:23.140943  319301 start.go:928] validating driver "docker" against &{Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:07:23.141082  319301 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:07:23.141181  319301 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:07:23.197269  319301 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 20:07:23.188068839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:07:23.197711  319301 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:07:23.197745  319301 cni.go:84] Creating CNI manager for ""
	I1227 20:07:23.197797  319301 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1227 20:07:23.197857  319301 start.go:353] cluster config:
	{Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:07:23.202906  319301 out.go:179] * Starting "ha-422549" primary control-plane node in "ha-422549" cluster
	I1227 20:07:23.205659  319301 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:07:23.208577  319301 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:07:23.211352  319301 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:07:23.211401  319301 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:07:23.211416  319301 cache.go:65] Caching tarball of preloaded images
	I1227 20:07:23.211429  319301 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:07:23.211499  319301 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:07:23.211509  319301 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:07:23.211655  319301 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:07:23.229712  319301 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:07:23.229734  319301 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:07:23.229749  319301 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:07:23.229779  319301 start.go:360] acquireMachinesLock for ha-422549: {Name:mk939e8ee4c2bedc86cc6a99d76298e7b2a26ce2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:07:23.229835  319301 start.go:364] duration metric: took 35.657µs to acquireMachinesLock for "ha-422549"
	I1227 20:07:23.229869  319301 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:07:23.229878  319301 fix.go:54] fixHost starting: 
	I1227 20:07:23.230138  319301 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:07:23.246992  319301 fix.go:112] recreateIfNeeded on ha-422549: state=Stopped err=<nil>
	W1227 20:07:23.247025  319301 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:07:23.250226  319301 out.go:252] * Restarting existing docker container for "ha-422549" ...
	I1227 20:07:23.250324  319301 cli_runner.go:164] Run: docker start ha-422549
	I1227 20:07:23.503347  319301 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:07:23.526447  319301 kic.go:430] container "ha-422549" state is running.
	I1227 20:07:23.526916  319301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:07:23.555271  319301 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:07:23.555509  319301 machine.go:94] provisionDockerMachine start ...
	I1227 20:07:23.555569  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:23.577158  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:23.577524  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1227 20:07:23.577542  319301 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:07:23.578121  319301 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44738->127.0.0.1:33173: read: connection reset by peer
	I1227 20:07:26.720977  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549
	
	I1227 20:07:26.721006  319301 ubuntu.go:182] provisioning hostname "ha-422549"
	I1227 20:07:26.721067  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:26.738818  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:26.739131  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1227 20:07:26.739148  319301 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-422549 && echo "ha-422549" | sudo tee /etc/hostname
	I1227 20:07:26.886109  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549
	
	I1227 20:07:26.886195  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:26.903863  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:26.904173  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1227 20:07:26.904194  319301 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422549' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422549/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422549' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:07:27.041724  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:07:27.041750  319301 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:07:27.041786  319301 ubuntu.go:190] setting up certificates
	I1227 20:07:27.041803  319301 provision.go:84] configureAuth start
	I1227 20:07:27.041869  319301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:07:27.060364  319301 provision.go:143] copyHostCerts
	I1227 20:07:27.060422  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:07:27.060455  319301 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:07:27.060473  319301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:07:27.060550  319301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:07:27.060645  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:07:27.060668  319301 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:07:27.060679  319301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:07:27.060709  319301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:07:27.060761  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:07:27.060783  319301 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:07:27.060791  319301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:07:27.060818  319301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:07:27.060870  319301 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.ha-422549 san=[127.0.0.1 192.168.49.2 ha-422549 localhost minikube]
	I1227 20:07:27.239677  319301 provision.go:177] copyRemoteCerts
	I1227 20:07:27.239745  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:07:27.239800  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:27.259369  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:07:27.364829  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:07:27.364890  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:07:27.382288  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:07:27.382362  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1227 20:07:27.399154  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:07:27.399213  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:07:27.417099  319301 provision.go:87] duration metric: took 375.277706ms to configureAuth
	I1227 20:07:27.417133  319301 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:07:27.417387  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:07:27.417527  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:27.434441  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:27.434764  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1227 20:07:27.434789  319301 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:07:27.806912  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:07:27.806938  319301 machine.go:97] duration metric: took 4.251419469s to provisionDockerMachine
	I1227 20:07:27.806950  319301 start.go:293] postStartSetup for "ha-422549" (driver="docker")
	I1227 20:07:27.806961  319301 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:07:27.807018  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:07:27.807063  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:27.827185  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:07:27.924757  319301 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:07:27.927910  319301 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:07:27.927939  319301 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:07:27.927951  319301 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:07:27.928034  319301 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:07:27.928163  319301 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:07:27.928176  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:07:27.928319  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:07:27.935125  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:07:27.951297  319301 start.go:296] duration metric: took 144.328969ms for postStartSetup
	I1227 20:07:27.951425  319301 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:07:27.951489  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:27.968679  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:07:28.062963  319301 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:07:28.068245  319301 fix.go:56] duration metric: took 4.838360246s for fixHost
	I1227 20:07:28.068273  319301 start.go:83] releasing machines lock for "ha-422549", held for 4.838415218s
	I1227 20:07:28.068391  319301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:07:28.086189  319301 ssh_runner.go:195] Run: cat /version.json
	I1227 20:07:28.086242  319301 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:07:28.086251  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:28.086297  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:28.112515  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:07:28.119040  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:07:28.213229  319301 ssh_runner.go:195] Run: systemctl --version
	I1227 20:07:28.307265  319301 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:07:28.344982  319301 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:07:28.349307  319301 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:07:28.349416  319301 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:07:28.357039  319301 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:07:28.357061  319301 start.go:496] detecting cgroup driver to use...
	I1227 20:07:28.357091  319301 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:07:28.357187  319301 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:07:28.372341  319301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:07:28.385115  319301 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:07:28.385188  319301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:07:28.400803  319301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:07:28.413692  319301 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:07:28.520682  319301 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:07:28.638372  319301 docker.go:234] disabling docker service ...
	I1227 20:07:28.638476  319301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:07:28.652726  319301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:07:28.665221  319301 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:07:28.769753  319301 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:07:28.887106  319301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:07:28.901250  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:07:28.915594  319301 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:07:28.915656  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.923915  319301 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:07:28.924023  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.932251  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.940443  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.948974  319301 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:07:28.956576  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.964831  319301 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.973077  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.981210  319301 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:07:28.988289  319301 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:07:28.995419  319301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:07:29.102806  319301 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:07:29.272446  319301 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:07:29.272527  319301 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:07:29.276338  319301 start.go:574] Will wait 60s for crictl version
	I1227 20:07:29.276409  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:07:29.279905  319301 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:07:29.303871  319301 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:07:29.303984  319301 ssh_runner.go:195] Run: crio --version
	I1227 20:07:29.330697  319301 ssh_runner.go:195] Run: crio --version
	I1227 20:07:29.362339  319301 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:07:29.365125  319301 cli_runner.go:164] Run: docker network inspect ha-422549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:07:29.381233  319301 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 20:07:29.385291  319301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:07:29.396534  319301 kubeadm.go:884] updating cluster {Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:07:29.396713  319301 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:07:29.396766  319301 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:07:29.430374  319301 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:07:29.430399  319301 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:07:29.430457  319301 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:07:29.459783  319301 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:07:29.459805  319301 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:07:29.459813  319301 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I1227 20:07:29.459907  319301 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422549 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:07:29.459984  319301 ssh_runner.go:195] Run: crio config
	I1227 20:07:29.529648  319301 cni.go:84] Creating CNI manager for ""
	I1227 20:07:29.529684  319301 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1227 20:07:29.529702  319301 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:07:29.529745  319301 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422549 NodeName:ha-422549 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:07:29.529880  319301 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422549"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:07:29.529906  319301 kube-vip.go:115] generating kube-vip config ...
	I1227 20:07:29.529981  319301 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 20:07:29.541823  319301 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:07:29.541926  319301 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1227 20:07:29.541995  319301 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:07:29.549349  319301 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:07:29.549419  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1227 20:07:29.556490  319301 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1227 20:07:29.568355  319301 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:07:29.580790  319301 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1227 20:07:29.593175  319301 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 20:07:29.606173  319301 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 20:07:29.609837  319301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:07:29.619217  319301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:07:29.735123  319301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:07:29.750389  319301 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549 for IP: 192.168.49.2
	I1227 20:07:29.750412  319301 certs.go:195] generating shared ca certs ...
	I1227 20:07:29.750427  319301 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:07:29.750619  319301 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:07:29.750682  319301 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:07:29.750699  319301 certs.go:257] generating profile certs ...
	I1227 20:07:29.750812  319301 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key
	I1227 20:07:29.751056  319301 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.743f7ef3
	I1227 20:07:29.751077  319301 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt.743f7ef3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1227 20:07:30.216987  319301 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt.743f7ef3 ...
	I1227 20:07:30.217024  319301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt.743f7ef3: {Name:mk5110c0017b8f4cda34fa079f107b622b8f9c47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:07:30.217226  319301 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.743f7ef3 ...
	I1227 20:07:30.217243  319301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.743f7ef3: {Name:mkb171a8982d80a151baacbc9fe03fa941196fd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:07:30.217342  319301 certs.go:382] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt.743f7ef3 -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt
	I1227 20:07:30.217509  319301 certs.go:386] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.743f7ef3 -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key
	I1227 20:07:30.217676  319301 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key
	I1227 20:07:30.217696  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:07:30.217721  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:07:30.217741  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:07:30.217759  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:07:30.217776  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:07:30.217799  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:07:30.217821  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:07:30.217837  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:07:30.217893  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:07:30.217940  319301 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:07:30.217953  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:07:30.217981  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:07:30.218009  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:07:30.218040  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:07:30.218095  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:07:30.218156  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:07:30.218174  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:07:30.218188  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:07:30.218745  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:07:30.239060  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:07:30.258056  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:07:30.279983  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:07:30.299163  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:07:30.317066  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:07:30.333792  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:07:30.363380  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:07:30.383880  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:07:30.402563  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:07:30.424158  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:07:30.441364  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:07:30.455028  319301 ssh_runner.go:195] Run: openssl version
	I1227 20:07:30.462193  319301 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:07:30.476783  319301 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:07:30.488736  319301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:07:30.492787  319301 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:07:30.492869  319301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:07:30.601338  319301 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:07:30.618710  319301 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:07:30.629367  319301 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:07:30.641908  319301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:07:30.646861  319301 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:07:30.646946  319301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:07:30.713797  319301 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:07:30.723031  319301 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:07:30.735659  319301 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:07:30.746061  319301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:07:30.750487  319301 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:07:30.750578  319301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:07:30.818577  319301 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:07:30.827800  319301 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:07:30.835007  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:07:30.906833  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:07:30.969599  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:07:31.044468  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:07:31.106453  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:07:31.155733  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 20:07:31.197366  319301 kubeadm.go:401] StartCluster: {Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:07:31.197537  319301 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:07:31.197613  319301 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:07:31.226634  319301 cri.go:96] found id: "c3f87ac29708d39b5580f953e8ccc765b36b830cf405bc7750b8afe798a15a77"
	I1227 20:07:31.226665  319301 cri.go:96] found id: "79f65bc2e1dbcf7ebe07acaf2143b45f059da3390e107fc3eb87595ccc5f920d"
	I1227 20:07:31.226671  319301 cri.go:96] found id: "dd811e752da4c2025246e605ecc1690aba8141353e20fb91cdad4468a1c059f9"
	I1227 20:07:31.226675  319301 cri.go:96] found id: "feeed30c26dbbb06391e6c43a6d6041af28ce218eaf23eec819dc38cda9444e8"
	I1227 20:07:31.226679  319301 cri.go:96] found id: "bbf24a80fc638071d98a1cc08ab823b436cc206cb456eac7a8be7958d11889db"
	I1227 20:07:31.226683  319301 cri.go:96] found id: ""
	I1227 20:07:31.226745  319301 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:07:31.244824  319301 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:07:31Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:07:31.244903  319301 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:07:31.257811  319301 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:07:31.257842  319301 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:07:31.257908  319301 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:07:31.270645  319301 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:07:31.271073  319301 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-422549" does not appear in /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:07:31.271185  319301 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-272475/kubeconfig needs updating (will repair): [kubeconfig missing "ha-422549" cluster setting kubeconfig missing "ha-422549" context setting]
	I1227 20:07:31.271518  319301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:07:31.272112  319301 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 20:07:31.272794  319301 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1227 20:07:31.272816  319301 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1227 20:07:31.272823  319301 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1227 20:07:31.272851  319301 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1227 20:07:31.272828  319301 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1227 20:07:31.272895  319301 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1227 20:07:31.272900  319301 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1227 20:07:31.273215  319301 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:07:31.284048  319301 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1227 20:07:31.284081  319301 kubeadm.go:602] duration metric: took 26.232251ms to restartPrimaryControlPlane
	I1227 20:07:31.284090  319301 kubeadm.go:403] duration metric: took 86.73489ms to StartCluster
	I1227 20:07:31.284107  319301 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:07:31.284175  319301 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:07:31.284780  319301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:07:31.284997  319301 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:07:31.285023  319301 start.go:242] waiting for startup goroutines ...
	I1227 20:07:31.285032  319301 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:07:31.285574  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:07:31.290925  319301 out.go:179] * Enabled addons: 
	I1227 20:07:31.294082  319301 addons.go:530] duration metric: took 9.037764ms for enable addons: enabled=[]
	I1227 20:07:31.294137  319301 start.go:247] waiting for cluster config update ...
	I1227 20:07:31.294152  319301 start.go:256] writing updated cluster config ...
	I1227 20:07:31.297568  319301 out.go:203] 
	I1227 20:07:31.300820  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:07:31.300937  319301 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:07:31.304320  319301 out.go:179] * Starting "ha-422549-m02" control-plane node in "ha-422549" cluster
	I1227 20:07:31.306983  319301 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:07:31.309971  319301 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:07:31.312773  319301 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:07:31.312796  319301 cache.go:65] Caching tarball of preloaded images
	I1227 20:07:31.312889  319301 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:07:31.312906  319301 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:07:31.313029  319301 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:07:31.313257  319301 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:07:31.349637  319301 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:07:31.349662  319301 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:07:31.349676  319301 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:07:31.349708  319301 start.go:360] acquireMachinesLock for ha-422549-m02: {Name:mk8fc7aa5d6c41749cc4b9db094e3fd243d8b868 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:07:31.349765  319301 start.go:364] duration metric: took 37.299µs to acquireMachinesLock for "ha-422549-m02"
	I1227 20:07:31.349791  319301 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:07:31.349796  319301 fix.go:54] fixHost starting: m02
	I1227 20:07:31.350055  319301 cli_runner.go:164] Run: docker container inspect ha-422549-m02 --format={{.State.Status}}
	I1227 20:07:31.391676  319301 fix.go:112] recreateIfNeeded on ha-422549-m02: state=Stopped err=<nil>
	W1227 20:07:31.391706  319301 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:07:31.394953  319301 out.go:252] * Restarting existing docker container for "ha-422549-m02" ...
	I1227 20:07:31.395043  319301 cli_runner.go:164] Run: docker start ha-422549-m02
	I1227 20:07:31.777922  319301 cli_runner.go:164] Run: docker container inspect ha-422549-m02 --format={{.State.Status}}
	I1227 20:07:31.805184  319301 kic.go:430] container "ha-422549-m02" state is running.
	I1227 20:07:31.805591  319301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m02
	I1227 20:07:31.841697  319301 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:07:31.841951  319301 machine.go:94] provisionDockerMachine start ...
	I1227 20:07:31.842022  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:31.865663  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:31.865982  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1227 20:07:31.865998  319301 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:07:31.866584  319301 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58412->127.0.0.1:33178: read: connection reset by peer
	I1227 20:07:35.045099  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m02
	
	I1227 20:07:35.045161  319301 ubuntu.go:182] provisioning hostname "ha-422549-m02"
	I1227 20:07:35.045260  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:35.074417  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:35.074732  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1227 20:07:35.074750  319301 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-422549-m02 && echo "ha-422549-m02" | sudo tee /etc/hostname
	I1227 20:07:35.272951  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m02
	
	I1227 20:07:35.273095  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:35.310855  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:35.311167  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1227 20:07:35.311187  319301 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422549-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422549-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422549-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:07:35.489398  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:07:35.489483  319301 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:07:35.489515  319301 ubuntu.go:190] setting up certificates
	I1227 20:07:35.489552  319301 provision.go:84] configureAuth start
	I1227 20:07:35.489651  319301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m02
	I1227 20:07:35.519140  319301 provision.go:143] copyHostCerts
	I1227 20:07:35.519180  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:07:35.519212  319301 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:07:35.519219  319301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:07:35.519305  319301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:07:35.519384  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:07:35.519400  319301 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:07:35.519405  319301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:07:35.519428  319301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:07:35.519467  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:07:35.519482  319301 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:07:35.519486  319301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:07:35.519508  319301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:07:35.519552  319301 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.ha-422549-m02 san=[127.0.0.1 192.168.49.3 ha-422549-m02 localhost minikube]
	I1227 20:07:35.673804  319301 provision.go:177] copyRemoteCerts
	I1227 20:07:35.676274  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:07:35.676362  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:35.700203  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:07:35.810686  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:07:35.810802  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 20:07:35.827198  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:07:35.827254  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:07:35.847940  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:07:35.848040  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:07:35.870095  319301 provision.go:87] duration metric: took 380.509887ms to configureAuth
	I1227 20:07:35.870124  319301 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:07:35.870422  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:07:35.870563  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:35.893611  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:35.893918  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1227 20:07:35.893932  319301 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:07:36.282435  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:07:36.282459  319301 machine.go:97] duration metric: took 4.440490595s to provisionDockerMachine
	I1227 20:07:36.282470  319301 start.go:293] postStartSetup for "ha-422549-m02" (driver="docker")
	I1227 20:07:36.282505  319301 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:07:36.282595  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:07:36.282666  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:36.301003  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:07:36.402628  319301 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:07:36.406068  319301 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:07:36.406097  319301 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:07:36.406108  319301 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:07:36.406247  319301 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:07:36.406355  319301 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:07:36.406371  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:07:36.406502  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:07:36.414126  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:07:36.431291  319301 start.go:296] duration metric: took 148.805898ms for postStartSetup
	I1227 20:07:36.431373  319301 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:07:36.431417  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:36.449358  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:07:36.546713  319301 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:07:36.551629  319301 fix.go:56] duration metric: took 5.201823785s for fixHost
	I1227 20:07:36.551655  319301 start.go:83] releasing machines lock for "ha-422549-m02", held for 5.20187627s
	I1227 20:07:36.551729  319301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m02
	I1227 20:07:36.571695  319301 out.go:179] * Found network options:
	I1227 20:07:36.574736  319301 out.go:179]   - NO_PROXY=192.168.49.2
	W1227 20:07:36.577654  319301 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:07:36.577694  319301 proxy.go:120] fail to check proxy env: Error ip not in block
	I1227 20:07:36.577781  319301 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:07:36.577827  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:36.578074  319301 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:07:36.578134  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:36.598248  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:07:36.598898  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:07:36.873888  319301 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:07:36.879823  319301 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:07:36.879937  319301 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:07:36.899888  319301 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:07:36.899953  319301 start.go:496] detecting cgroup driver to use...
	I1227 20:07:36.899997  319301 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:07:36.900076  319301 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:07:36.928970  319301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:07:36.947727  319301 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:07:36.947845  319301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:07:36.967863  319301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:07:36.998332  319301 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:07:37.167619  319301 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:07:37.326628  319301 docker.go:234] disabling docker service ...
	I1227 20:07:37.326748  319301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:07:37.341981  319301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:07:37.354777  319301 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:07:37.613409  319301 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:07:37.870750  319301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:07:37.886152  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:07:37.906254  319301 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:07:37.906377  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.926031  319301 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:07:37.926143  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.937485  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.946425  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.958890  319301 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:07:37.968858  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.978269  319301 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.986277  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.995011  319301 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:07:38.002468  319301 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:07:38.010027  319301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:07:38.207437  319301 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:09:08.647737  319301 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.440260784s)
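
The 90-second CRI-O restart above (20:07:38 to 20:09:08) is the single largest chunk of this node's re-provisioning time; when reproducing, the unit's own journal is the most direct place to look (a manual-check sketch, not part of the test run):

	sudo systemctl status crio --no-pager
	sudo journalctl -u crio -n 200 --no-pager
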
	I1227 20:09:08.647767  319301 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:09:08.647821  319301 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:09:08.651981  319301 start.go:574] Will wait 60s for crictl version
	I1227 20:09:08.652048  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:09:08.655690  319301 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:09:08.681479  319301 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:09:08.681565  319301 ssh_runner.go:195] Run: crio --version
	I1227 20:09:08.713332  319301 ssh_runner.go:195] Run: crio --version
	I1227 20:09:08.746336  319301 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:09:08.749205  319301 out.go:179]   - env NO_PROXY=192.168.49.2
	I1227 20:09:08.752182  319301 cli_runner.go:164] Run: docker network inspect ha-422549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:09:08.768090  319301 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 20:09:08.771937  319301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:09:08.781622  319301 mustload.go:66] Loading cluster: ha-422549
	I1227 20:09:08.781869  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:09:08.782144  319301 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:09:08.798634  319301 host.go:66] Checking if "ha-422549" exists ...
	I1227 20:09:08.798913  319301 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549 for IP: 192.168.49.3
	I1227 20:09:08.798926  319301 certs.go:195] generating shared ca certs ...
	I1227 20:09:08.798941  319301 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:09:08.799067  319301 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:09:08.799116  319301 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:09:08.799129  319301 certs.go:257] generating profile certs ...
	I1227 20:09:08.799210  319301 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key
	I1227 20:09:08.799280  319301 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.982843aa
	I1227 20:09:08.799324  319301 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key
	I1227 20:09:08.799337  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:09:08.799350  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:09:08.799367  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:09:08.799386  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:09:08.799406  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:09:08.799422  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:09:08.799438  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:09:08.799453  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:09:08.799510  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:09:08.799546  319301 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:09:08.799559  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:09:08.799588  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:09:08.799617  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:09:08.799646  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:09:08.799694  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:09:08.799727  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:09:08.799744  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:09:08.799758  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:09:08.799822  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:09:08.817939  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:09:08.909783  319301 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1227 20:09:08.913788  319301 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1227 20:09:08.922116  319301 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1227 20:09:08.925553  319301 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1227 20:09:08.933735  319301 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1227 20:09:08.937584  319301 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1227 20:09:08.946742  319301 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1227 20:09:08.951033  319301 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1227 20:09:08.959969  319301 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1227 20:09:08.963648  319301 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1227 20:09:08.971803  319301 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1227 20:09:08.975349  319301 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1227 20:09:08.983445  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:09:09.001559  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:09:09.020775  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:09:09.041958  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:09:09.059931  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:09:09.076796  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:09:09.095447  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:09:09.113037  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:09:09.130903  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:09:09.148555  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:09:09.167075  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:09:09.184251  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1227 20:09:09.197053  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1227 20:09:09.209869  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1227 20:09:09.223329  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1227 20:09:09.236109  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1227 20:09:09.249524  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1227 20:09:09.262558  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (728 bytes)
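
The stat/scp pairs above keep the control planes on shared signing material: the service-account keypair, front-proxy CA and etcd CA are read into memory from the existing control plane and written back out so the node being joined issues tokens and certificates from the same roots. Roughly, per file, the pattern is (an illustrative shell sketch of the idea, not the Go code minikube actually runs; host names are placeholders):

	ssh <primary> 'sudo cat /var/lib/minikube/certs/sa.key' \
	  | ssh <joining-node> 'sudo tee /var/lib/minikube/certs/sa.key >/dev/null'
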
	I1227 20:09:09.278766  319301 ssh_runner.go:195] Run: openssl version
	I1227 20:09:09.288173  319301 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:09:09.303263  319301 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:09:09.312839  319301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:09:09.317343  319301 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:09:09.317435  319301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:09:09.358946  319301 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:09:09.366603  319301 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:09:09.374144  319301 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:09:09.381566  319301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:09:09.385396  319301 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:09:09.385483  319301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:09:09.427186  319301 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:09:09.435033  319301 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:09:09.442740  319301 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:09:09.450736  319301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:09:09.455313  319301 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:09:09.455406  319301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:09:09.506456  319301 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
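
The ln/openssl/test sequence above builds the usual OpenSSL hash-link layout: each CA sits in /usr/share/ca-certificates, is symlinked into /etc/ssl/certs, and is checked for a second link named after its subject hash so certificate verification can find it. The same check by hand, using the hash printed just above:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # the hash link that test -L just verified
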
	I1227 20:09:09.515191  319301 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:09:09.519143  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:09:09.560830  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:09:09.601733  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:09:09.642802  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:09:09.683557  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:09:09.724343  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 20:09:09.764937  319301 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.35.0 crio true true} ...
	I1227 20:09:09.765044  319301 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422549-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
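
The kubelet flags above are not written into the main unit file; they travel in a systemd drop-in, which is what the scp lines further down deliver before daemon-reload and start. The resulting layout, with paths taken from those lines (a sketch for orientation):

	/lib/systemd/system/kubelet.service                       # base unit (352 bytes below)
	/etc/systemd/system/kubelet.service.d/10-kubeadm.conf     # drop-in carrying the [Unit]/[Service] text above (363 bytes below)
	# then: sudo systemctl daemon-reload && sudo systemctl start kubelet
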
	I1227 20:09:09.765076  319301 kube-vip.go:115] generating kube-vip config ...
	I1227 20:09:09.765126  319301 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 20:09:09.777907  319301 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:09:09.778008  319301 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
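
The manifest above is delivered as a static pod: it is copied straight into the kubelet manifest directory (the kube-vip.yaml scp a few lines below), so kube-vip is started by kubelet alone, before any apiserver on this node is reachable, and then advertises the 192.168.49.254 control-plane VIP. In effect:

	# static-pod placement — kubelet watches this directory and runs whatever lands in it
	sudo cp kube-vip.yaml /etc/kubernetes/manifests/kube-vip.yaml
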
	I1227 20:09:09.778101  319301 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:09:09.785542  319301 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:09:09.785669  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1227 20:09:09.793814  319301 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 20:09:09.808509  319301 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:09:09.822210  319301 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 20:09:09.836025  319301 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 20:09:09.840416  319301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:09:09.851735  319301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:09:09.987416  319301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:09:10.000958  319301 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:09:10.001514  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:09:10.006801  319301 out.go:179] * Verifying Kubernetes components...
	I1227 20:09:10.009655  319301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:09:10.156826  319301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:09:10.171179  319301 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1227 20:09:10.171261  319301 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1227 20:09:10.171542  319301 node_ready.go:35] waiting up to 6m0s for node "ha-422549-m02" to be "Ready" ...
	I1227 20:09:13.107692  319301 node_ready.go:49] node "ha-422549-m02" is "Ready"
	I1227 20:09:13.107720  319301 node_ready.go:38] duration metric: took 2.936159281s for node "ha-422549-m02" to be "Ready" ...
	I1227 20:09:13.107734  319301 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:09:13.107789  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:13.607926  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:14.107987  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:14.607959  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:15.108981  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:15.607952  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:16.108673  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:16.608170  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:17.108757  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:17.608081  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:18.108738  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:18.608607  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:19.108699  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:19.608389  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:20.107908  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:20.608001  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:21.108548  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:21.608334  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:22.108180  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:22.607875  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:23.108675  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:23.608625  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:24.108180  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:24.608668  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:25.108754  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:25.607950  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:26.107930  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:26.607944  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:27.108744  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:27.608613  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:28.108398  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:28.608347  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:29.108513  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:29.607943  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:30.108298  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:30.607986  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:31.108862  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:31.608852  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:32.108838  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:32.608448  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:33.108526  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:33.608595  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:34.108250  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:34.607930  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:35.107952  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:35.608214  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:36.108509  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:36.608114  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:37.108454  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:37.607937  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:38.108594  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:38.607928  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:39.107995  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:39.608876  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:40.107937  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:40.607935  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:41.108437  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:41.607967  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:42.110329  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:42.608527  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:43.108197  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:43.608003  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:44.108494  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:44.608788  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:45.108779  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:45.608786  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:46.108080  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:46.608527  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:47.108485  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:47.608412  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:48.108174  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:48.608559  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:49.108719  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:49.608778  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:50.108396  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:50.608188  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:51.108854  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:51.607920  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:52.108260  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:52.607897  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:53.108165  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:53.608820  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:54.107921  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:54.608807  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:55.107966  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:55.608683  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:56.108704  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:56.608641  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:57.107949  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:57.608891  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:58.107911  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:58.607913  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:59.108124  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:59.608080  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:00.126668  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:00.607936  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:01.107972  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:01.607964  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:02.108918  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:02.608274  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:03.108889  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:03.607948  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:04.108838  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:04.608617  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:05.108707  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:05.608552  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:06.108350  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:06.607927  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:07.108601  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:07.607942  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:08.108292  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:08.607954  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:09.108836  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:09.608829  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:10.108562  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:10.108721  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:10.138615  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:10.138637  319301 cri.go:96] found id: ""
	I1227 20:10:10.138646  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:10.138711  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:10.143115  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:10.143189  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:10.173558  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:10.173579  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:10.173584  319301 cri.go:96] found id: ""
	I1227 20:10:10.173592  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:10.173653  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:10.178008  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:10.182191  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:10.182272  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:10.220643  319301 cri.go:96] found id: ""
	I1227 20:10:10.220668  319301 logs.go:282] 0 containers: []
	W1227 20:10:10.220677  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:10.220684  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:10.220746  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:10.250139  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:10.250162  319301 cri.go:96] found id: ""
	I1227 20:10:10.250170  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:10.250228  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:10.253966  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:10.254039  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:10.290311  319301 cri.go:96] found id: ""
	I1227 20:10:10.290334  319301 logs.go:282] 0 containers: []
	W1227 20:10:10.290343  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:10.290349  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:10.290422  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:10.319925  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:10.319948  319301 cri.go:96] found id: ""
	I1227 20:10:10.319974  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:10.320031  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:10.323821  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:10.323902  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:10.352069  319301 cri.go:96] found id: ""
	I1227 20:10:10.352091  319301 logs.go:282] 0 containers: []
	W1227 20:10:10.352100  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:10.352115  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:10.352127  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:10.451345  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:10.451385  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:10.469929  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:10.469961  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:10.875866  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:10.868032    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.868914    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.869711    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.870583    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.872332    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:10.868032    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.868914    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.869711    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.870583    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.872332    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
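
The describe-nodes failure above is a side effect of the same apiserver wait: the node-local kubeconfig points kubectl at localhost:8443, and nothing is listening there while kube-apiserver is still coming up. A quick confirmation from inside the node (a sketch, assuming the layout shown in the command above):

	sudo grep 'server:' /var/lib/minikube/kubeconfig   # expect https://localhost:8443
	sudo ss -ltnp | grep 8443                          # empty until the apiserver binds the port
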
	I1227 20:10:10.875894  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:10.875909  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:10.936407  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:10.936442  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:10.983671  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:10.983707  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:11.017260  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:11.017294  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:11.052563  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:11.052594  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:11.130184  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:11.130222  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:11.162524  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:11.162557  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:13.706075  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:13.716624  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:13.716698  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:13.747368  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:13.747388  319301 cri.go:96] found id: ""
	I1227 20:10:13.747396  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:13.747456  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:13.751096  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:13.751188  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:13.777717  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:13.777790  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:13.777802  319301 cri.go:96] found id: ""
	I1227 20:10:13.777811  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:13.777878  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:13.781548  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:13.785083  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:13.785193  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:13.811036  319301 cri.go:96] found id: ""
	I1227 20:10:13.811063  319301 logs.go:282] 0 containers: []
	W1227 20:10:13.811072  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:13.811079  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:13.811137  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:13.837822  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:13.837845  319301 cri.go:96] found id: ""
	I1227 20:10:13.837854  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:13.837911  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:13.841739  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:13.841856  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:13.868264  319301 cri.go:96] found id: ""
	I1227 20:10:13.868341  319301 logs.go:282] 0 containers: []
	W1227 20:10:13.868364  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:13.868387  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:13.868471  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:13.894511  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:13.894535  319301 cri.go:96] found id: ""
	I1227 20:10:13.894543  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:13.894621  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:13.898655  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:13.898764  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:13.924022  319301 cri.go:96] found id: ""
	I1227 20:10:13.924047  319301 logs.go:282] 0 containers: []
	W1227 20:10:13.924062  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:13.924077  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:13.924089  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:13.956536  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:13.956567  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:14.057854  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:14.057894  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:14.139219  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:14.129809    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.130897    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.132418    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.132833    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.134384    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:14.129809    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.130897    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.132418    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.132833    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.134384    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:14.139251  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:14.139265  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:14.182716  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:14.182750  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:14.208224  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:14.208301  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:14.225984  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:14.226016  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:14.256249  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:14.256314  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:14.301058  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:14.301201  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:14.329017  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:14.329046  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:16.906959  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:16.917912  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:16.917986  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:16.947235  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:16.947299  319301 cri.go:96] found id: ""
	I1227 20:10:16.947322  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:16.947404  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:16.951076  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:16.951204  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:16.984938  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:16.984962  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:16.984968  319301 cri.go:96] found id: ""
	I1227 20:10:16.984976  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:16.985053  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:16.988800  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:16.992512  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:16.992592  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:17.026764  319301 cri.go:96] found id: ""
	I1227 20:10:17.026789  319301 logs.go:282] 0 containers: []
	W1227 20:10:17.026798  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:17.026804  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:17.026875  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:17.053717  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:17.053741  319301 cri.go:96] found id: ""
	I1227 20:10:17.053749  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:17.053803  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:17.057601  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:17.057691  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:17.088432  319301 cri.go:96] found id: ""
	I1227 20:10:17.088455  319301 logs.go:282] 0 containers: []
	W1227 20:10:17.088464  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:17.088470  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:17.088529  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:17.115961  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:17.115985  319301 cri.go:96] found id: ""
	I1227 20:10:17.115995  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:17.116046  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:17.119890  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:17.119963  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:17.148631  319301 cri.go:96] found id: ""
	I1227 20:10:17.148654  319301 logs.go:282] 0 containers: []
	W1227 20:10:17.148663  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:17.148678  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:17.148694  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:17.240100  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:17.240138  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:17.259693  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:17.259725  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:17.291635  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:17.291666  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:17.368588  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:17.368624  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:17.407623  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:17.407652  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:17.475650  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:17.467352    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.467760    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.469497    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.470032    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.471718    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:17.467352    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.467760    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.469497    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.470032    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.471718    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:17.475719  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:17.475739  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:17.516294  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:17.516328  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:17.559509  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:17.559544  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:17.587296  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:17.587332  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:20.115472  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:20.126778  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:20.126847  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:20.153825  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:20.153850  319301 cri.go:96] found id: ""
	I1227 20:10:20.153859  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:20.153919  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:20.157682  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:20.157759  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:20.189317  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:20.189386  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:20.189420  319301 cri.go:96] found id: ""
	I1227 20:10:20.189493  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:20.189582  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:20.193669  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:20.197374  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:20.197473  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:20.237542  319301 cri.go:96] found id: ""
	I1227 20:10:20.237570  319301 logs.go:282] 0 containers: []
	W1227 20:10:20.237579  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:20.237585  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:20.237643  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:20.274313  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:20.274381  319301 cri.go:96] found id: ""
	I1227 20:10:20.274417  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:20.274509  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:20.279651  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:20.279718  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:20.306525  319301 cri.go:96] found id: ""
	I1227 20:10:20.306586  319301 logs.go:282] 0 containers: []
	W1227 20:10:20.306610  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:20.306636  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:20.306707  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:20.333808  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:20.333829  319301 cri.go:96] found id: ""
	I1227 20:10:20.333837  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:20.333927  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:20.337575  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:20.337677  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:20.372581  319301 cri.go:96] found id: ""
	I1227 20:10:20.372607  319301 logs.go:282] 0 containers: []
	W1227 20:10:20.372621  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:20.372636  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:20.372647  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:20.467758  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:20.467794  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:20.486495  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:20.486527  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:20.553188  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:20.545238    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.545758    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.547330    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.548070    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.549570    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:20.545238    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.545758    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.547330    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.548070    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.549570    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:20.553253  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:20.553282  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:20.580345  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:20.580374  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:20.626310  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:20.626345  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:20.670432  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:20.670467  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:20.696170  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:20.696199  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:20.730948  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:20.730976  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:20.805291  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:20.805325  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:23.351696  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:23.362369  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:23.362478  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:23.391572  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:23.391649  319301 cri.go:96] found id: ""
	I1227 20:10:23.391664  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:23.391739  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:23.395547  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:23.395671  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:23.422118  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:23.422141  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:23.422147  319301 cri.go:96] found id: ""
	I1227 20:10:23.422155  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:23.422235  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:23.426008  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:23.429336  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:23.429411  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:23.459272  319301 cri.go:96] found id: ""
	I1227 20:10:23.459299  319301 logs.go:282] 0 containers: []
	W1227 20:10:23.459308  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:23.459316  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:23.459398  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:23.484648  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:23.484671  319301 cri.go:96] found id: ""
	I1227 20:10:23.484679  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:23.484755  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:23.488422  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:23.488501  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:23.512953  319301 cri.go:96] found id: ""
	I1227 20:10:23.512978  319301 logs.go:282] 0 containers: []
	W1227 20:10:23.512987  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:23.512994  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:23.513049  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:23.538866  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:23.538889  319301 cri.go:96] found id: ""
	I1227 20:10:23.538898  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:23.538952  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:23.542487  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:23.542556  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:23.568959  319301 cri.go:96] found id: ""
	I1227 20:10:23.568985  319301 logs.go:282] 0 containers: []
	W1227 20:10:23.568994  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:23.569010  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:23.569023  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:23.614313  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:23.614346  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:23.639847  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:23.639875  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:23.671907  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:23.671936  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:23.702365  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:23.702394  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:23.783203  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:23.783246  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:23.884915  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:23.884948  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:23.902305  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:23.902337  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:23.970687  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:23.961560    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.962112    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.963576    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.964060    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.965635    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:23.961560    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.962112    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.963576    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.964060    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.965635    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:23.970722  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:23.970735  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:24.004792  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:24.004819  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:26.564703  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:26.575059  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:26.575143  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:26.604294  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:26.604317  319301 cri.go:96] found id: ""
	I1227 20:10:26.604326  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:26.604381  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:26.608875  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:26.608942  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:26.634574  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:26.634595  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:26.634600  319301 cri.go:96] found id: ""
	I1227 20:10:26.634607  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:26.634660  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:26.638317  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:26.641718  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:26.641787  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:26.670771  319301 cri.go:96] found id: ""
	I1227 20:10:26.670793  319301 logs.go:282] 0 containers: []
	W1227 20:10:26.670802  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:26.670808  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:26.670867  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:26.697344  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:26.697376  319301 cri.go:96] found id: ""
	I1227 20:10:26.697386  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:26.697491  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:26.701237  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:26.701344  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:26.726058  319301 cri.go:96] found id: ""
	I1227 20:10:26.726125  319301 logs.go:282] 0 containers: []
	W1227 20:10:26.726140  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:26.726147  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:26.726209  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:26.752574  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:26.752594  319301 cri.go:96] found id: ""
	I1227 20:10:26.752602  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:26.752658  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:26.756386  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:26.756457  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:26.786442  319301 cri.go:96] found id: ""
	I1227 20:10:26.786465  319301 logs.go:282] 0 containers: []
	W1227 20:10:26.786474  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:26.786488  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:26.786500  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:26.814367  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:26.814441  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:26.839989  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:26.840061  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:26.876712  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:26.876796  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:26.918742  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:26.918784  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:26.961668  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:26.961699  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:26.994123  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:26.994151  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:27.085553  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:27.085590  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:27.186397  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:27.186433  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:27.204121  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:27.204153  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:27.273016  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:27.262702    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.263577    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.265227    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.266801    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.267439    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:27.262702    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.263577    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.265227    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.266801    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.267439    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:29.773264  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:29.783744  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:29.783817  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:29.813744  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:29.813806  319301 cri.go:96] found id: ""
	I1227 20:10:29.813829  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:29.813919  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:29.818669  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:29.818786  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:29.844784  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:29.844802  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:29.844806  319301 cri.go:96] found id: ""
	I1227 20:10:29.844814  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:29.844868  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:29.848603  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:29.852078  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:29.852143  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:29.878788  319301 cri.go:96] found id: ""
	I1227 20:10:29.878814  319301 logs.go:282] 0 containers: []
	W1227 20:10:29.878823  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:29.878830  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:29.878890  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:29.908178  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:29.908200  319301 cri.go:96] found id: ""
	I1227 20:10:29.908209  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:29.908264  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:29.911793  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:29.911884  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:29.952724  319301 cri.go:96] found id: ""
	I1227 20:10:29.952749  319301 logs.go:282] 0 containers: []
	W1227 20:10:29.952759  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:29.952765  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:29.952855  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:30.008208  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:30.008289  319301 cri.go:96] found id: ""
	I1227 20:10:30.008312  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:30.008390  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:30.012672  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:30.012766  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:30.063201  319301 cri.go:96] found id: ""
	I1227 20:10:30.063273  319301 logs.go:282] 0 containers: []
	W1227 20:10:30.063297  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:30.063334  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:30.063369  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:30.152059  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:30.152097  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:30.188985  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:30.189011  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:30.288999  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:30.289079  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:30.307734  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:30.307764  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:30.354973  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:30.355008  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:30.425745  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:30.417740    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.418295    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.419807    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.420357    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.421985    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:30.417740    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.418295    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.419807    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.420357    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.421985    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:30.425773  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:30.425789  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:30.454739  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:30.454771  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:30.511002  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:30.511040  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:30.537495  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:30.537526  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:33.065805  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:33.076295  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:33.076418  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:33.103323  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:33.103346  319301 cri.go:96] found id: ""
	I1227 20:10:33.103356  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:33.103410  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:33.107007  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:33.107081  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:33.133167  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:33.133190  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:33.133195  319301 cri.go:96] found id: ""
	I1227 20:10:33.133203  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:33.133264  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:33.137298  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:33.141081  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:33.141152  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:33.167830  319301 cri.go:96] found id: ""
	I1227 20:10:33.167854  319301 logs.go:282] 0 containers: []
	W1227 20:10:33.167862  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:33.167869  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:33.167929  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:33.196531  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:33.196555  319301 cri.go:96] found id: ""
	I1227 20:10:33.196564  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:33.196621  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:33.200165  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:33.200267  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:33.226904  319301 cri.go:96] found id: ""
	I1227 20:10:33.226933  319301 logs.go:282] 0 containers: []
	W1227 20:10:33.226943  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:33.226950  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:33.227009  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:33.254111  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:33.254132  319301 cri.go:96] found id: ""
	I1227 20:10:33.254141  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:33.254197  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:33.258995  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:33.259128  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:33.285296  319301 cri.go:96] found id: ""
	I1227 20:10:33.285320  319301 logs.go:282] 0 containers: []
	W1227 20:10:33.285330  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:33.285350  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:33.285363  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:33.379312  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:33.379349  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:33.397669  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:33.397703  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:33.475423  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:33.464091    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.464710    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.467091    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.469890    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.471637    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:33.464091    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.464710    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.467091    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.469890    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.471637    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:33.475445  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:33.475462  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:33.505362  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:33.505391  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:33.549322  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:33.549353  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:33.592755  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:33.592789  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:33.625076  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:33.625105  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:33.676663  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:33.676692  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:33.703598  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:33.703627  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:36.283392  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:36.293854  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:36.293938  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:36.321425  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:36.321524  319301 cri.go:96] found id: ""
	I1227 20:10:36.321538  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:36.321604  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:36.325322  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:36.325393  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:36.354160  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:36.354182  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:36.354187  319301 cri.go:96] found id: ""
	I1227 20:10:36.354194  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:36.354250  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:36.357942  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:36.361261  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:36.361336  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:36.387328  319301 cri.go:96] found id: ""
	I1227 20:10:36.387356  319301 logs.go:282] 0 containers: []
	W1227 20:10:36.387366  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:36.387373  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:36.387431  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:36.418785  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:36.418807  319301 cri.go:96] found id: ""
	I1227 20:10:36.418815  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:36.418871  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:36.422631  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:36.422709  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:36.452773  319301 cri.go:96] found id: ""
	I1227 20:10:36.452799  319301 logs.go:282] 0 containers: []
	W1227 20:10:36.452807  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:36.452814  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:36.452873  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:36.478409  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:36.478432  319301 cri.go:96] found id: ""
	I1227 20:10:36.478440  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:36.478515  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:36.482226  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:36.482329  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:36.510113  319301 cri.go:96] found id: ""
	I1227 20:10:36.510139  319301 logs.go:282] 0 containers: []
	W1227 20:10:36.510148  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:36.510162  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:36.510206  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:36.528485  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:36.528518  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:36.596104  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:36.586542    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.587371    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.589128    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.589804    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.591834    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:36.586542    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.587371    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.589128    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.589804    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.591834    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:36.596128  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:36.596153  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:36.656568  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:36.656646  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:36.685002  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:36.685040  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:36.719044  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:36.719072  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:36.815628  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:36.815664  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:36.845372  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:36.845407  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:36.892923  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:36.892962  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:36.920168  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:36.920205  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:39.498228  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:39.509127  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:39.509200  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:39.535429  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:39.535450  319301 cri.go:96] found id: ""
	I1227 20:10:39.535458  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:39.535511  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:39.539036  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:39.539115  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:39.565370  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:39.565395  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:39.565401  319301 cri.go:96] found id: ""
	I1227 20:10:39.565411  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:39.565505  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:39.569317  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:39.572838  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:39.572913  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:39.600208  319301 cri.go:96] found id: ""
	I1227 20:10:39.600233  319301 logs.go:282] 0 containers: []
	W1227 20:10:39.600243  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:39.600249  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:39.600359  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:39.627924  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:39.627947  319301 cri.go:96] found id: ""
	I1227 20:10:39.627955  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:39.628038  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:39.631825  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:39.631929  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:39.670875  319301 cri.go:96] found id: ""
	I1227 20:10:39.670898  319301 logs.go:282] 0 containers: []
	W1227 20:10:39.670907  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:39.670949  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:39.671032  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:39.698935  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:39.698963  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:39.698968  319301 cri.go:96] found id: ""
	I1227 20:10:39.698976  319301 logs.go:282] 2 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:39.699057  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:39.702755  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:39.706280  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:39.706367  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:39.732144  319301 cri.go:96] found id: ""
	I1227 20:10:39.732171  319301 logs.go:282] 0 containers: []
	W1227 20:10:39.732192  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:39.732202  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:39.732218  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:39.833062  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:39.833097  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:39.851039  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:39.851169  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:39.936210  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:39.936253  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:40.017614  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:40.018998  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:40.077844  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:40.077881  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:40.191560  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:40.191604  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:40.229430  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:40.229483  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:40.316177  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:40.307077    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.308580    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.309399    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.310789    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.312661    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:40.307077    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.308580    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.309399    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.310789    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.312661    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:40.316202  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:40.316215  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:40.351544  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:40.351584  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:40.379852  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:40.379880  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:42.911718  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:42.922519  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:42.922590  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:42.949680  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:42.949705  319301 cri.go:96] found id: ""
	I1227 20:10:42.949714  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:42.949773  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:42.953773  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:42.953858  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:42.986307  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:42.986333  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:42.986340  319301 cri.go:96] found id: ""
	I1227 20:10:42.986347  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:42.986401  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:42.989939  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:42.993412  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:42.993511  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:43.027198  319301 cri.go:96] found id: ""
	I1227 20:10:43.027224  319301 logs.go:282] 0 containers: []
	W1227 20:10:43.027244  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:43.027251  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:43.027314  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:43.054716  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:43.054739  319301 cri.go:96] found id: ""
	I1227 20:10:43.054748  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:43.054803  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:43.059284  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:43.059357  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:43.093962  319301 cri.go:96] found id: ""
	I1227 20:10:43.093986  319301 logs.go:282] 0 containers: []
	W1227 20:10:43.093995  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:43.094002  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:43.094060  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:43.122219  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:43.122257  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:43.122263  319301 cri.go:96] found id: ""
	I1227 20:10:43.122270  319301 logs.go:282] 2 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:43.122337  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:43.126232  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:43.129862  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:43.129978  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:43.156857  319301 cri.go:96] found id: ""
	I1227 20:10:43.156882  319301 logs.go:282] 0 containers: []
	W1227 20:10:43.156891  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:43.156901  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:43.156914  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:43.174975  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:43.175005  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:43.219964  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:43.220004  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:43.245562  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:43.245591  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:43.276688  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:43.276770  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:43.358338  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:43.358380  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:43.402206  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:43.402234  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:43.499249  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:43.499289  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:43.576572  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:43.568454    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.569067    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.570849    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.571386    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.572871    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:43.568454    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.569067    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.570849    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.571386    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.572871    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:43.576591  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:43.576605  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:43.604599  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:43.604686  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:43.650961  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:43.651038  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:46.181580  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:46.192165  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:46.192233  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:46.218480  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:46.218500  319301 cri.go:96] found id: ""
	I1227 20:10:46.218509  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:46.218563  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:46.222189  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:46.222263  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:46.253302  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:46.253327  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:46.253332  319301 cri.go:96] found id: ""
	I1227 20:10:46.253340  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:46.253398  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:46.257309  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:46.260898  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:46.260974  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:46.289145  319301 cri.go:96] found id: ""
	I1227 20:10:46.289218  319301 logs.go:282] 0 containers: []
	W1227 20:10:46.289241  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:46.289262  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:46.289352  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:46.318927  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:46.318948  319301 cri.go:96] found id: ""
	I1227 20:10:46.318956  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:46.319015  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:46.322605  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:46.322674  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:46.354035  319301 cri.go:96] found id: ""
	I1227 20:10:46.354061  319301 logs.go:282] 0 containers: []
	W1227 20:10:46.354071  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:46.354077  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:46.354168  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:46.384710  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:46.384734  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:46.384740  319301 cri.go:96] found id: ""
	I1227 20:10:46.384748  319301 logs.go:282] 2 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:46.384803  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:46.388496  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:46.392532  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:46.392611  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:46.421588  319301 cri.go:96] found id: ""
	I1227 20:10:46.421664  319301 logs.go:282] 0 containers: []
	W1227 20:10:46.421686  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:46.421709  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:46.421746  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:46.439228  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:46.439330  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:46.484770  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:46.484806  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:46.519247  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:46.519273  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:46.597066  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:46.597101  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:46.634009  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:46.634040  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:46.701472  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:46.693690    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.694466    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.695987    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.696422    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.697877    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:46.693690    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.694466    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.695987    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.696422    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.697877    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:46.701496  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:46.701512  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:46.729296  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:46.729326  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:46.774639  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:46.774678  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:46.799969  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:46.800005  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:46.826163  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:46.826192  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:49.429141  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:49.439610  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:49.439705  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:49.470260  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:49.470283  319301 cri.go:96] found id: ""
	I1227 20:10:49.470292  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:49.470350  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:49.474256  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:49.474343  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:49.501740  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:49.501762  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:49.501767  319301 cri.go:96] found id: ""
	I1227 20:10:49.501774  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:49.501850  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:49.505843  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:49.509390  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:49.509489  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:49.543998  319301 cri.go:96] found id: ""
	I1227 20:10:49.544022  319301 logs.go:282] 0 containers: []
	W1227 20:10:49.544041  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:49.544049  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:49.544107  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:49.570494  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:49.570517  319301 cri.go:96] found id: ""
	I1227 20:10:49.570525  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:49.570581  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:49.574401  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:49.574471  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:49.603448  319301 cri.go:96] found id: ""
	I1227 20:10:49.603475  319301 logs.go:282] 0 containers: []
	W1227 20:10:49.603486  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:49.603500  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:49.603573  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:49.633356  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:49.633379  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:49.633385  319301 cri.go:96] found id: ""
	I1227 20:10:49.633392  319301 logs.go:282] 2 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:49.633474  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:49.637216  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:49.641370  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:49.641472  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:49.669518  319301 cri.go:96] found id: ""
	I1227 20:10:49.669557  319301 logs.go:282] 0 containers: []
	W1227 20:10:49.669567  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:49.669576  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:49.669588  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:49.696361  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:49.696389  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:49.721155  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:49.721184  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:49.753420  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:49.753489  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:49.832989  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:49.833025  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:49.874986  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:49.875013  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:49.978286  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:49.978321  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:49.997322  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:49.997351  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:50.080526  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:50.072015    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.072678    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.074595    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.075259    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.076874    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:50.072015    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.072678    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.074595    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.075259    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.076874    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:50.080546  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:50.080560  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:50.139866  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:50.139902  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:50.184649  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:50.184682  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:52.713968  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:52.726778  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:52.726855  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:52.758017  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:52.758040  319301 cri.go:96] found id: ""
	I1227 20:10:52.758049  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:52.758104  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:52.761780  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:52.761855  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:52.789053  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:52.789076  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:52.789081  319301 cri.go:96] found id: ""
	I1227 20:10:52.789088  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:52.789140  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:52.792812  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:52.796144  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:52.796211  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:52.825853  319301 cri.go:96] found id: ""
	I1227 20:10:52.825883  319301 logs.go:282] 0 containers: []
	W1227 20:10:52.825892  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:52.825898  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:52.825955  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:52.851800  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:52.851820  319301 cri.go:96] found id: ""
	I1227 20:10:52.851828  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:52.851881  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:52.855382  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:52.855455  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:52.885699  319301 cri.go:96] found id: ""
	I1227 20:10:52.885721  319301 logs.go:282] 0 containers: []
	W1227 20:10:52.885736  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:52.885742  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:52.885800  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:52.911251  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:52.911316  319301 cri.go:96] found id: ""
	I1227 20:10:52.911339  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:10:52.911402  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:52.914760  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:52.914841  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:52.939685  319301 cri.go:96] found id: ""
	I1227 20:10:52.939718  319301 logs.go:282] 0 containers: []
	W1227 20:10:52.939728  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:52.939742  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:52.939789  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:53.033951  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:53.033990  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:53.052877  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:53.052906  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:53.096670  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:53.096715  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:53.128695  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:53.128722  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:53.161100  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:53.161130  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:53.227545  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:53.218833    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.219420    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.221028    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.221951    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.223525    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:53.218833    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.219420    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.221028    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.221951    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.223525    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:53.227617  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:53.227640  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:53.255984  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:53.256125  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:53.313035  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:53.313074  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:53.338975  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:53.339057  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
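The cycle above keeps repeating because the kube-apiserver container exists but nothing is answering on localhost:8443 yet, so every "kubectl describe nodes" attempt is refused while minikube re-gathers logs and retries. A minimal sketch of an equivalent wait loop is below; it assumes the /healthz endpoint and the 8443 port seen in the log, and it is illustrative only, not minikube's own code.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServer polls https://host/healthz until it answers or the
// deadline passes. TLS verification is skipped because this ad-hoc probe
// does not trust the apiserver's serving certificate (illustration only).
func waitForAPIServer(host string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://" + host + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver is serving
			}
		}
		time.Sleep(3 * time.Second) // roughly the retry cadence visible in the log
	}
	return fmt.Errorf("apiserver on %s did not become healthy within %s", host, timeout)
}

func main() {
	if err := waitForAPIServer("localhost:8443", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}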
	I1227 20:10:55.915383  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:55.925492  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:55.925565  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:55.952010  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:55.952028  319301 cri.go:96] found id: ""
	I1227 20:10:55.952037  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:55.952092  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:55.955593  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:55.955667  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:55.986538  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:55.986561  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:55.986567  319301 cri.go:96] found id: ""
	I1227 20:10:55.986574  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:55.986628  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:55.990714  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:55.995050  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:55.995121  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:56.024488  319301 cri.go:96] found id: ""
	I1227 20:10:56.024565  319301 logs.go:282] 0 containers: []
	W1227 20:10:56.024588  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:56.024612  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:56.024696  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:56.056966  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:56.057039  319301 cri.go:96] found id: ""
	I1227 20:10:56.057065  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:56.057155  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:56.061997  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:56.062234  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:56.089345  319301 cri.go:96] found id: ""
	I1227 20:10:56.089372  319301 logs.go:282] 0 containers: []
	W1227 20:10:56.089381  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:56.089388  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:56.089488  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:56.117758  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:56.117782  319301 cri.go:96] found id: ""
	I1227 20:10:56.117790  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:10:56.117845  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:56.121319  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:56.121432  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:56.147067  319301 cri.go:96] found id: ""
	I1227 20:10:56.147092  319301 logs.go:282] 0 containers: []
	W1227 20:10:56.147102  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:56.147115  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:56.147130  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:56.224179  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:56.224218  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:56.256694  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:56.256721  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:56.283858  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:56.283889  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:56.353505  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:56.342078    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.343458    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.346135    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.347096    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.347948    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:56.342078    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.343458    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.346135    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.347096    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.347948    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:56.353534  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:56.353548  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:56.399836  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:56.399870  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:56.494637  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:56.494677  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:56.528262  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:56.528292  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:56.577163  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:56.577198  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:56.605916  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:56.605945  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
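Each pass also re-enumerates control-plane containers with "crictl ps -a --quiet --name=<component>", which is why the same etcd and kube-apiserver IDs keep reappearing between retries. A minimal sketch of that lookup follows; it assumes crictl is on PATH and runnable without sudo, and it is an illustration of the pattern, not the cri.go implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of all containers (any state) whose name
// matches the given component, by shelling out to crictl the same way the
// commands in the log above do.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("crictl", "--timeout=10s", "ps", "-a", "--quiet",
		"--name="+component).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps for %q: %w", component, err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Printf("%s: %d container(s) %v\n", name, len(ids), ids)
	}
}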
	I1227 20:10:59.134704  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:59.144988  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:59.145094  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:59.170826  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:59.170846  319301 cri.go:96] found id: ""
	I1227 20:10:59.170859  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:59.170916  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:59.174542  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:59.174618  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:59.204712  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:59.204734  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:59.204738  319301 cri.go:96] found id: ""
	I1227 20:10:59.204746  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:59.204800  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:59.208625  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:59.212119  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:59.212200  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:59.241075  319301 cri.go:96] found id: ""
	I1227 20:10:59.241150  319301 logs.go:282] 0 containers: []
	W1227 20:10:59.241174  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:59.241195  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:59.241312  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:59.277168  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:59.277252  319301 cri.go:96] found id: ""
	I1227 20:10:59.277274  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:59.277366  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:59.281934  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:59.282029  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:59.307601  319301 cri.go:96] found id: ""
	I1227 20:10:59.307627  319301 logs.go:282] 0 containers: []
	W1227 20:10:59.307636  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:59.307643  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:59.307704  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:59.341899  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:59.341923  319301 cri.go:96] found id: ""
	I1227 20:10:59.341931  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:10:59.341999  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:59.345734  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:59.345844  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:59.371593  319301 cri.go:96] found id: ""
	I1227 20:10:59.371661  319301 logs.go:282] 0 containers: []
	W1227 20:10:59.371683  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:59.371716  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:59.371755  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:59.464618  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:59.464654  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:59.483758  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:59.483793  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:59.555654  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:59.546856    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.547308    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.548491    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.548938    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.550344    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:59.546856    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.547308    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.548491    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.548938    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.550344    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:59.555678  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:59.555696  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:59.583971  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:59.584004  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:59.635084  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:59.635118  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:59.662345  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:59.662375  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:59.726915  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:59.726950  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:59.754060  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:59.754094  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:59.836493  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:59.836534  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:02.376222  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:02.386794  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:02.386868  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:02.419031  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:02.419054  319301 cri.go:96] found id: ""
	I1227 20:11:02.419062  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:02.419118  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:02.423033  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:02.423106  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:02.448867  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:02.448891  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:02.448896  319301 cri.go:96] found id: ""
	I1227 20:11:02.448903  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:02.448957  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:02.452561  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:02.455963  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:02.456070  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:02.484254  319301 cri.go:96] found id: ""
	I1227 20:11:02.484281  319301 logs.go:282] 0 containers: []
	W1227 20:11:02.484290  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:02.484297  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:02.484357  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:02.511483  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:02.511506  319301 cri.go:96] found id: ""
	I1227 20:11:02.511515  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:02.511580  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:02.515291  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:02.515364  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:02.542839  319301 cri.go:96] found id: ""
	I1227 20:11:02.542866  319301 logs.go:282] 0 containers: []
	W1227 20:11:02.542886  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:02.542894  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:02.543025  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:02.576471  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:02.576505  319301 cri.go:96] found id: ""
	I1227 20:11:02.576519  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:02.576578  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:02.580126  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:02.580205  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:02.610225  319301 cri.go:96] found id: ""
	I1227 20:11:02.610252  319301 logs.go:282] 0 containers: []
	W1227 20:11:02.610261  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:02.610275  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:02.610316  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:02.640738  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:02.640766  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:02.688087  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:02.688120  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:02.714149  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:02.714175  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:02.743134  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:02.743161  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:02.822169  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:02.822206  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:02.894561  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:02.894595  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:02.936069  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:02.936096  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:03.036539  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:03.036573  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:03.054449  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:03.054480  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:03.132045  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:03.124246    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.125028    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.126504    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.127054    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.128486    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:03.124246    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.125028    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.126504    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.127054    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.128486    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:05.633596  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:05.644441  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:05.644564  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:05.671495  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:05.671520  319301 cri.go:96] found id: ""
	I1227 20:11:05.671528  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:05.671603  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:05.675058  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:05.675148  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:05.699421  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:05.699443  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:05.699448  319301 cri.go:96] found id: ""
	I1227 20:11:05.699456  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:05.699512  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:05.703223  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:05.706661  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:05.706747  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:05.731295  319301 cri.go:96] found id: ""
	I1227 20:11:05.731319  319301 logs.go:282] 0 containers: []
	W1227 20:11:05.731328  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:05.731334  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:05.731409  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:05.758394  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:05.758427  319301 cri.go:96] found id: ""
	I1227 20:11:05.758435  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:05.758500  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:05.762213  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:05.762304  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:05.788439  319301 cri.go:96] found id: ""
	I1227 20:11:05.788465  319301 logs.go:282] 0 containers: []
	W1227 20:11:05.788473  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:05.788480  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:05.788546  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:05.814115  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:05.814137  319301 cri.go:96] found id: ""
	I1227 20:11:05.814145  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:05.814199  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:05.817823  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:05.817893  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:05.844939  319301 cri.go:96] found id: ""
	I1227 20:11:05.844963  319301 logs.go:282] 0 containers: []
	W1227 20:11:05.844973  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:05.844988  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:05.845002  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:05.863023  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:05.863054  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:05.932754  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:05.924777    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.925338    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.926988    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.927561    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.928952    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:05.924777    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.925338    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.926988    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.927561    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.928952    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:05.932785  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:05.932802  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:05.960574  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:05.960604  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:06.004048  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:06.004082  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:06.055406  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:06.055441  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:06.082613  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:06.082643  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:06.115617  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:06.115646  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:06.149699  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:06.149729  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:06.250917  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:06.250950  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:08.830917  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:08.841316  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:08.841404  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:08.871386  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:08.871407  319301 cri.go:96] found id: ""
	I1227 20:11:08.871415  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:08.871483  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:08.875249  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:08.875334  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:08.905155  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:08.905178  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:08.905182  319301 cri.go:96] found id: ""
	I1227 20:11:08.905189  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:08.905256  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:08.909157  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:08.912623  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:08.912696  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:08.940125  319301 cri.go:96] found id: ""
	I1227 20:11:08.940151  319301 logs.go:282] 0 containers: []
	W1227 20:11:08.940161  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:08.940168  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:08.940228  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:08.979078  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:08.979099  319301 cri.go:96] found id: ""
	I1227 20:11:08.979115  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:08.979172  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:08.982993  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:08.983079  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:09.010456  319301 cri.go:96] found id: ""
	I1227 20:11:09.010482  319301 logs.go:282] 0 containers: []
	W1227 20:11:09.010491  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:09.010498  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:09.010559  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:09.046193  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:09.046226  319301 cri.go:96] found id: ""
	I1227 20:11:09.046235  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:09.046293  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:09.050361  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:09.050429  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:09.076865  319301 cri.go:96] found id: ""
	I1227 20:11:09.076892  319301 logs.go:282] 0 containers: []
	W1227 20:11:09.076901  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:09.076917  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:09.076929  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:09.103766  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:09.103793  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:09.121384  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:09.121412  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:09.190959  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:09.182712    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.183470    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.185037    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.185570    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.187248    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:09.182712    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.183470    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.185037    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.185570    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.187248    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:09.191026  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:09.191058  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:09.238609  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:09.238648  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:09.332804  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:09.332844  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:09.374845  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:09.374874  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:09.475731  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:09.475770  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:09.505046  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:09.505075  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:09.550742  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:09.550779  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:12.077490  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:12.089114  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:12.089187  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:12.117965  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:12.117987  319301 cri.go:96] found id: ""
	I1227 20:11:12.117995  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:12.118048  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:12.121654  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:12.121727  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:12.150616  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:12.150645  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:12.150650  319301 cri.go:96] found id: ""
	I1227 20:11:12.150658  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:12.150714  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:12.154526  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:12.157975  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:12.158059  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:12.188379  319301 cri.go:96] found id: ""
	I1227 20:11:12.188406  319301 logs.go:282] 0 containers: []
	W1227 20:11:12.188415  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:12.188421  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:12.188479  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:12.214099  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:12.214125  319301 cri.go:96] found id: ""
	I1227 20:11:12.214134  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:12.214187  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:12.217805  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:12.217871  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:12.244974  319301 cri.go:96] found id: ""
	I1227 20:11:12.244999  319301 logs.go:282] 0 containers: []
	W1227 20:11:12.245008  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:12.245015  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:12.245071  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:12.281031  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:12.281071  319301 cri.go:96] found id: ""
	I1227 20:11:12.281079  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:12.281146  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:12.284926  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:12.285004  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:12.311055  319301 cri.go:96] found id: ""
	I1227 20:11:12.311079  319301 logs.go:282] 0 containers: []
	W1227 20:11:12.311088  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:12.311101  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:12.311113  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:12.330032  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:12.330065  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:12.359973  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:12.360000  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:12.405129  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:12.405163  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:12.460783  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:12.460817  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:12.488201  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:12.488230  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:12.565465  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:12.565502  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:12.662969  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:12.663007  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:12.735836  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:12.727495    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.728366    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.730010    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.730324    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.731834    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:12.727495    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.728366    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.730010    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.730324    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.731834    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:12.735859  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:12.735872  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:12.763143  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:12.763168  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:15.305823  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:15.318015  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:15.318113  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:15.347994  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:15.348017  319301 cri.go:96] found id: ""
	I1227 20:11:15.348026  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:15.348089  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:15.351955  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:15.352056  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:15.378004  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:15.378026  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:15.378031  319301 cri.go:96] found id: ""
	I1227 20:11:15.378038  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:15.378091  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:15.381599  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:15.384824  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:15.384889  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:15.409597  319301 cri.go:96] found id: ""
	I1227 20:11:15.409673  319301 logs.go:282] 0 containers: []
	W1227 20:11:15.409695  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:15.409716  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:15.409805  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:15.436026  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:15.436091  319301 cri.go:96] found id: ""
	I1227 20:11:15.436114  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:15.436205  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:15.439709  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:15.439776  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:15.472950  319301 cri.go:96] found id: ""
	I1227 20:11:15.472974  319301 logs.go:282] 0 containers: []
	W1227 20:11:15.472983  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:15.472990  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:15.473047  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:15.503060  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:15.503083  319301 cri.go:96] found id: ""
	I1227 20:11:15.503092  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:15.503166  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:15.506772  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:15.506841  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:15.531805  319301 cri.go:96] found id: ""
	I1227 20:11:15.531828  319301 logs.go:282] 0 containers: []
	W1227 20:11:15.531837  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:15.531849  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:15.531861  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:15.557217  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:15.557253  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:15.583522  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:15.583550  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:15.646957  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:15.646994  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:15.677573  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:15.677601  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:15.763080  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:15.763117  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:15.795445  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:15.795473  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:15.895027  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:15.895063  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:15.914036  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:15.914065  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:15.990029  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:15.981434    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.982226    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.983747    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.984333    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.986074    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:15.981434    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.982226    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.983747    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.984333    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.986074    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:15.990048  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:15.990061  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:18.535347  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:18.545638  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:18.545712  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:18.573096  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:18.573125  319301 cri.go:96] found id: ""
	I1227 20:11:18.573135  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:18.573190  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:18.577413  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:18.577512  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:18.604633  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:18.604657  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:18.604662  319301 cri.go:96] found id: ""
	I1227 20:11:18.604670  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:18.604724  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:18.610098  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:18.613744  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:18.613821  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:18.645090  319301 cri.go:96] found id: ""
	I1227 20:11:18.645116  319301 logs.go:282] 0 containers: []
	W1227 20:11:18.645126  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:18.645132  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:18.645191  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:18.671681  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:18.671705  319301 cri.go:96] found id: ""
	I1227 20:11:18.671713  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:18.671768  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:18.675284  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:18.675356  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:18.701086  319301 cri.go:96] found id: ""
	I1227 20:11:18.701109  319301 logs.go:282] 0 containers: []
	W1227 20:11:18.701117  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:18.701123  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:18.701183  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:18.733157  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:18.733176  319301 cri.go:96] found id: ""
	I1227 20:11:18.733185  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:18.733237  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:18.736898  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:18.736978  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:18.761319  319301 cri.go:96] found id: ""
	I1227 20:11:18.761340  319301 logs.go:282] 0 containers: []
	W1227 20:11:18.761349  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:18.761362  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:18.761374  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:18.793077  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:18.793104  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:18.819425  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:18.819453  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:18.859846  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:18.859919  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:18.938269  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:18.938303  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:19.040817  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:19.040856  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:19.059170  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:19.059202  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:19.132074  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:19.121248    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.122916    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.123583    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.125207    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.125782    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:19.121248    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.122916    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.123583    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.125207    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.125782    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:19.132096  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:19.132111  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:19.179880  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:19.179916  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:19.223928  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:19.223963  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:21.759181  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:21.769762  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:21.769833  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:21.800302  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:21.800323  319301 cri.go:96] found id: ""
	I1227 20:11:21.800332  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:21.800395  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:21.804375  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:21.804458  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:21.830687  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:21.830711  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:21.830717  319301 cri.go:96] found id: ""
	I1227 20:11:21.830724  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:21.830779  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:21.834661  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:21.838097  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:21.838198  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:21.864157  319301 cri.go:96] found id: ""
	I1227 20:11:21.864183  319301 logs.go:282] 0 containers: []
	W1227 20:11:21.864193  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:21.864199  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:21.864292  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:21.890722  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:21.890747  319301 cri.go:96] found id: ""
	I1227 20:11:21.890756  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:21.890812  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:21.894377  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:21.894447  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:21.921902  319301 cri.go:96] found id: ""
	I1227 20:11:21.921932  319301 logs.go:282] 0 containers: []
	W1227 20:11:21.921941  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:21.921948  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:21.922013  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:21.948157  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:21.948181  319301 cri.go:96] found id: ""
	I1227 20:11:21.948190  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:21.948246  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:21.951860  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:21.951928  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:21.979147  319301 cri.go:96] found id: ""
	I1227 20:11:21.979171  319301 logs.go:282] 0 containers: []
	W1227 20:11:21.979181  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:21.979222  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:21.979242  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:22.077716  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:22.077768  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:22.161527  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:22.149113    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.149745    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.154386    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.154984    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.157780    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:22.149113    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.149745    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.154386    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.154984    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.157780    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:22.161553  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:22.161566  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:22.193359  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:22.193386  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:22.247574  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:22.247611  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:22.302993  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:22.303034  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:22.332035  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:22.332064  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:22.358225  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:22.358265  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:22.437089  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:22.437124  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:22.455750  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:22.455781  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:24.990837  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:25.001120  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:25.001190  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:25.040369  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:25.040388  319301 cri.go:96] found id: ""
	I1227 20:11:25.040396  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:25.040452  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:25.044321  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:25.044388  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:25.075240  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:25.075264  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:25.075268  319301 cri.go:96] found id: ""
	I1227 20:11:25.075276  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:25.075331  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:25.079221  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:25.083046  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:25.083117  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:25.111437  319301 cri.go:96] found id: ""
	I1227 20:11:25.111466  319301 logs.go:282] 0 containers: []
	W1227 20:11:25.111475  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:25.111482  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:25.111540  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:25.139474  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:25.139498  319301 cri.go:96] found id: ""
	I1227 20:11:25.139507  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:25.139572  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:25.143469  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:25.143540  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:25.177080  319301 cri.go:96] found id: ""
	I1227 20:11:25.177103  319301 logs.go:282] 0 containers: []
	W1227 20:11:25.177112  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:25.177119  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:25.177235  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:25.204123  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:25.204146  319301 cri.go:96] found id: ""
	I1227 20:11:25.204155  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:25.204238  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:25.207906  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:25.207978  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:25.233127  319301 cri.go:96] found id: ""
	I1227 20:11:25.233150  319301 logs.go:282] 0 containers: []
	W1227 20:11:25.233160  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:25.233175  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:25.233187  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:25.252764  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:25.252793  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:25.302886  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:25.302924  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:25.327231  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:25.327259  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:25.357720  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:25.357749  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:25.396486  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:25.396513  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:25.469872  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:25.461875    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.462332    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.464006    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.464571    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.466153    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:25.461875    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.462332    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.464006    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.464571    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.466153    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:25.469894  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:25.469907  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:25.498176  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:25.498204  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:25.547245  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:25.547279  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:25.629600  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:25.629639  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:28.230549  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:28.241564  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:28.241641  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:28.279080  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:28.279110  319301 cri.go:96] found id: ""
	I1227 20:11:28.279119  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:28.279185  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:28.284314  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:28.284405  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:28.316322  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:28.316389  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:28.316408  319301 cri.go:96] found id: ""
	I1227 20:11:28.316436  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:28.316522  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:28.320358  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:28.323910  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:28.324004  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:28.354101  319301 cri.go:96] found id: ""
	I1227 20:11:28.354172  319301 logs.go:282] 0 containers: []
	W1227 20:11:28.354195  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:28.354221  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:28.354308  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:28.381894  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:28.381933  319301 cri.go:96] found id: ""
	I1227 20:11:28.381944  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:28.382007  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:28.385565  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:28.385640  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:28.412036  319301 cri.go:96] found id: ""
	I1227 20:11:28.412063  319301 logs.go:282] 0 containers: []
	W1227 20:11:28.412072  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:28.412079  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:28.412136  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:28.437133  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:28.437154  319301 cri.go:96] found id: ""
	I1227 20:11:28.437162  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:28.437216  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:28.440922  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:28.441006  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:28.469470  319301 cri.go:96] found id: ""
	I1227 20:11:28.469495  319301 logs.go:282] 0 containers: []
	W1227 20:11:28.469505  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:28.469518  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:28.469531  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:28.512248  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:28.512281  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:28.538806  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:28.538834  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:28.615719  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:28.615756  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:28.651963  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:28.651992  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:28.753577  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:28.753616  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:28.770745  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:28.770778  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:28.798843  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:28.798878  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:28.867106  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:28.858730    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.859584    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.861103    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.861408    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.863356    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:28.858730    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.859584    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.861103    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.861408    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.863356    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:28.867124  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:28.867137  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:28.897868  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:28.897897  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
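	(The cycle above repeats for the remainder of this log: minikube probes for an apiserver process, enumerates the control-plane containers by name, and then tails their logs. The sketch below is illustrative only, not minikube's code: the helper name listContainerIDs and the main wrapper are assumptions, but it wraps the same "crictl ps -a --quiet --name=<component>" call that appears in each cycle.)

	// lookup.go: minimal sketch of the container-discovery step seen above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs returns the IDs crictl prints (one per line) for
	// containers whose name matches the given component; an empty result
	// corresponds to the `found id: ""` / `0 containers` lines in the log.
	func listContainerIDs(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "--timeout=10s",
			"ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}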
	I1227 20:11:31.455673  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:31.466341  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:31.466412  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:31.494286  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:31.494305  319301 cri.go:96] found id: ""
	I1227 20:11:31.494312  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:31.494368  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:31.499152  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:31.499229  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:31.525626  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:31.525647  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:31.525651  319301 cri.go:96] found id: ""
	I1227 20:11:31.525666  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:31.525721  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:31.529291  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:31.532543  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:31.532612  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:31.558153  319301 cri.go:96] found id: ""
	I1227 20:11:31.558178  319301 logs.go:282] 0 containers: []
	W1227 20:11:31.558187  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:31.558193  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:31.558274  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:31.585024  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:31.585047  319301 cri.go:96] found id: ""
	I1227 20:11:31.585055  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:31.585109  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:31.588772  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:31.588841  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:31.615373  319301 cri.go:96] found id: ""
	I1227 20:11:31.615398  319301 logs.go:282] 0 containers: []
	W1227 20:11:31.615408  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:31.615414  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:31.615474  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:31.644548  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:31.644571  319301 cri.go:96] found id: ""
	I1227 20:11:31.644579  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:31.644634  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:31.648326  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:31.648396  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:31.674106  319301 cri.go:96] found id: ""
	I1227 20:11:31.674128  319301 logs.go:282] 0 containers: []
	W1227 20:11:31.674137  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:31.674152  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:31.674165  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:31.769885  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:31.769924  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:31.787798  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:31.787829  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:31.840240  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:31.840276  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:31.883880  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:31.883914  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:31.912615  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:31.912645  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:31.993762  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:31.993796  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:32.038771  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:32.038807  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:32.113504  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:32.105141    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.106007    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.106783    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.108406    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.108703    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:32.105141    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.106007    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.106783    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.108406    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.108703    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:32.113531  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:32.113545  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:32.145482  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:32.145508  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:34.675972  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:34.687181  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:34.687251  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:34.713741  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:34.713768  319301 cri.go:96] found id: ""
	I1227 20:11:34.713776  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:34.713837  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:34.717422  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:34.717525  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:34.742801  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:34.742824  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:34.742829  319301 cri.go:96] found id: ""
	I1227 20:11:34.742836  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:34.742890  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:34.746901  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:34.750347  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:34.750438  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:34.776122  319301 cri.go:96] found id: ""
	I1227 20:11:34.776156  319301 logs.go:282] 0 containers: []
	W1227 20:11:34.776165  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:34.776173  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:34.776241  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:34.801663  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:34.801687  319301 cri.go:96] found id: ""
	I1227 20:11:34.801696  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:34.801752  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:34.805521  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:34.805600  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:34.839033  319301 cri.go:96] found id: ""
	I1227 20:11:34.839059  319301 logs.go:282] 0 containers: []
	W1227 20:11:34.839068  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:34.839075  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:34.839164  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:34.875359  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:34.875380  319301 cri.go:96] found id: ""
	I1227 20:11:34.875389  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:34.875444  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:34.879108  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:34.879203  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:34.904808  319301 cri.go:96] found id: ""
	I1227 20:11:34.904831  319301 logs.go:282] 0 containers: []
	W1227 20:11:34.904839  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:34.904882  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:34.904902  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:35.001157  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:35.001197  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:35.036396  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:35.036492  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:35.100412  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:35.100452  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:35.130486  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:35.130514  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:35.212133  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:35.212170  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:35.261425  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:35.261489  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:35.279972  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:35.280002  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:35.344789  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:35.336875    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.337423    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.338974    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.339514    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.340959    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:35.336875    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.337423    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.338974    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.339514    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.340959    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:35.344811  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:35.344826  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:35.388398  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:35.388438  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:37.916139  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:37.926579  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:37.926656  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:37.957965  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:37.957990  319301 cri.go:96] found id: ""
	I1227 20:11:37.958011  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:37.958064  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:37.961819  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:37.961939  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:37.990732  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:37.990756  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:37.990763  319301 cri.go:96] found id: ""
	I1227 20:11:37.990774  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:37.990832  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:37.994865  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:37.998563  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:37.998657  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:38.029180  319301 cri.go:96] found id: ""
	I1227 20:11:38.029206  319301 logs.go:282] 0 containers: []
	W1227 20:11:38.029228  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:38.029235  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:38.029302  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:38.058262  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:38.058287  319301 cri.go:96] found id: ""
	I1227 20:11:38.058295  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:38.058390  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:38.062798  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:38.062895  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:38.093594  319301 cri.go:96] found id: ""
	I1227 20:11:38.093630  319301 logs.go:282] 0 containers: []
	W1227 20:11:38.093641  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:38.093647  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:38.093723  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:38.122677  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:38.122700  319301 cri.go:96] found id: ""
	I1227 20:11:38.122710  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:38.122784  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:38.126481  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:38.126556  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:38.152399  319301 cri.go:96] found id: ""
	I1227 20:11:38.152425  319301 logs.go:282] 0 containers: []
	W1227 20:11:38.152434  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:38.152447  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:38.152459  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:38.169834  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:38.169865  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:38.236553  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:38.228832    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.229398    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.230976    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.231455    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.232939    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:38.228832    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.229398    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.230976    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.231455    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.232939    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:38.236574  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:38.236587  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:38.283907  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:38.283942  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:38.327559  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:38.327595  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:38.354915  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:38.354944  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:38.385535  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:38.385567  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:38.482920  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:38.482955  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:38.513709  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:38.513737  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:38.541063  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:38.541092  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:41.120061  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:41.130482  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:41.130560  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:41.157933  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:41.157995  319301 cri.go:96] found id: ""
	I1227 20:11:41.158011  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:41.158068  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:41.161515  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:41.161587  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:41.186761  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:41.186784  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:41.186789  319301 cri.go:96] found id: ""
	I1227 20:11:41.186796  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:41.186853  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:41.190548  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:41.194929  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:41.195019  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:41.225573  319301 cri.go:96] found id: ""
	I1227 20:11:41.225600  319301 logs.go:282] 0 containers: []
	W1227 20:11:41.225609  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:41.225615  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:41.225678  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:41.255736  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:41.255810  319301 cri.go:96] found id: ""
	I1227 20:11:41.255833  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:41.255924  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:41.259619  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:41.259730  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:41.293635  319301 cri.go:96] found id: ""
	I1227 20:11:41.293658  319301 logs.go:282] 0 containers: []
	W1227 20:11:41.293667  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:41.293674  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:41.293736  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:41.325226  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:41.325248  319301 cri.go:96] found id: ""
	I1227 20:11:41.325257  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:41.325311  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:41.328850  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:41.328919  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:41.356320  319301 cri.go:96] found id: ""
	I1227 20:11:41.356345  319301 logs.go:282] 0 containers: []
	W1227 20:11:41.356354  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:41.356370  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:41.356383  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:41.384750  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:41.384777  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:41.438279  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:41.438315  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:41.496771  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:41.496814  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:41.525343  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:41.525373  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:41.558207  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:41.558235  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:41.657075  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:41.657112  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:41.689798  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:41.689828  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:41.769585  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:41.769620  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:41.787874  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:41.787906  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:41.852555  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:41.844441    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.845015    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.846678    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.847233    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.849010    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:41.844441    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.845015    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.846678    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.847233    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.849010    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
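	(Every "describe nodes" attempt above fails the same way: the TCP dial to localhost:8443 is refused, so kubectl never gets as far as TLS or authentication; no apiserver is accepting connections on that port yet. A tiny reachability check, included only as an illustration and not part of the test suite, is sketched below.)

	// probe.go: reproduce the "connect: connection refused" condition with a
	// plain TCP dial; no client certificates are needed for this check.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}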
	I1227 20:11:44.353586  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:44.364496  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:44.364591  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:44.396750  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:44.396823  319301 cri.go:96] found id: ""
	I1227 20:11:44.396848  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:44.396920  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:44.400610  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:44.400687  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:44.428171  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:44.428250  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:44.428271  319301 cri.go:96] found id: ""
	I1227 20:11:44.428296  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:44.428411  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:44.432219  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:44.435828  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:44.435901  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:44.464904  319301 cri.go:96] found id: ""
	I1227 20:11:44.464931  319301 logs.go:282] 0 containers: []
	W1227 20:11:44.464953  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:44.464960  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:44.465019  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:44.494508  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:44.494537  319301 cri.go:96] found id: ""
	I1227 20:11:44.494546  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:44.494602  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:44.498485  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:44.498588  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:44.526221  319301 cri.go:96] found id: ""
	I1227 20:11:44.526249  319301 logs.go:282] 0 containers: []
	W1227 20:11:44.526258  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:44.526264  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:44.526337  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:44.557553  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:44.557629  319301 cri.go:96] found id: ""
	I1227 20:11:44.557644  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:44.557713  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:44.561435  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:44.561578  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:44.588202  319301 cri.go:96] found id: ""
	I1227 20:11:44.588227  319301 logs.go:282] 0 containers: []
	W1227 20:11:44.588236  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:44.588250  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:44.588281  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:44.636647  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:44.636688  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:44.715003  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:44.715041  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:44.746461  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:44.746488  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:44.840354  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:44.840392  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:44.910107  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:44.902375    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.902947    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.904566    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.905162    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.906700    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:44.902375    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.902947    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.904566    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.905162    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.906700    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:44.910127  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:44.910139  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:44.958123  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:44.958155  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:44.988455  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:44.988486  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:45.017637  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:45.017669  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:45.068015  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:45.068047  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:47.639577  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:47.650807  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:47.650879  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:47.680709  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:47.680780  319301 cri.go:96] found id: ""
	I1227 20:11:47.680801  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:47.680886  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:47.684862  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:47.684933  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:47.711503  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:47.711527  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:47.711533  319301 cri.go:96] found id: ""
	I1227 20:11:47.711541  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:47.711597  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:47.715323  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:47.718860  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:47.718939  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:47.745091  319301 cri.go:96] found id: ""
	I1227 20:11:47.745118  319301 logs.go:282] 0 containers: []
	W1227 20:11:47.745128  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:47.745134  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:47.745190  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:47.774661  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:47.774683  319301 cri.go:96] found id: ""
	I1227 20:11:47.774691  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:47.774751  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:47.778781  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:47.778879  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:47.805242  319301 cri.go:96] found id: ""
	I1227 20:11:47.805268  319301 logs.go:282] 0 containers: []
	W1227 20:11:47.805278  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:47.805284  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:47.805350  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:47.833172  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:47.833240  319301 cri.go:96] found id: ""
	I1227 20:11:47.833262  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:47.833351  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:47.837087  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:47.837159  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:47.865275  319301 cri.go:96] found id: ""
	I1227 20:11:47.865353  319301 logs.go:282] 0 containers: []
	W1227 20:11:47.865380  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:47.865432  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:47.865505  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:47.944986  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:47.945022  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:47.980482  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:47.980511  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:47.999608  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:47.999639  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:48.076328  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:48.067348    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.068343    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.070039    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.070763    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.072273    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:48.067348    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.068343    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.070039    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.070763    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.072273    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:48.076352  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:48.076365  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:48.102940  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:48.102968  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:48.195452  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:48.195490  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:48.225373  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:48.225402  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:48.273525  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:48.273604  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:48.325768  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:48.325805  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
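	(Once containers are found, the gathering step above tails the last 400 lines of each with "crictl logs --tail 400 <id>", while the kubelet and CRI-O logs come from "journalctl -u <unit> -n 400". The sketch below mirrors that collection loop; the run helper and the placeholder container ID are assumptions for illustration, not minikube's logs.go implementation.)

	// gather.go: minimal sketch of the log-collection loop seen above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and returns its combined output, folding any
	// error into the returned text so collection continues past failures.
	func run(name string, args ...string) string {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			return fmt.Sprintf("error running %s: %v\n%s", name, err, out)
		}
		return string(out)
	}

	func main() {
		logs := map[string]string{
			// systemd units, mirroring `journalctl -u <unit> -n 400` above.
			"kubelet": run("sudo", "journalctl", "-u", "kubelet", "-n", "400"),
			"crio":    run("sudo", "journalctl", "-u", "crio", "-n", "400"),
		}
		// Container IDs would normally come from the crictl discovery step;
		// this one is a placeholder, not a real ID from this report.
		for _, id := range []string{"<container-id>"} {
			logs[id] = run("sudo", "crictl", "logs", "--tail", "400", id)
		}
		for name, contents := range logs {
			fmt.Printf("=== %s (%d bytes) ===\n", name, len(contents))
		}
	}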
	I1227 20:11:50.855952  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:50.867387  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:50.867456  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:50.897533  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:50.897556  319301 cri.go:96] found id: ""
	I1227 20:11:50.897565  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:50.897617  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:50.900982  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:50.901048  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:50.935428  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:50.935450  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:50.935455  319301 cri.go:96] found id: ""
	I1227 20:11:50.935468  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:50.935521  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:50.939266  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:50.943149  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:50.943266  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:50.974808  319301 cri.go:96] found id: ""
	I1227 20:11:50.974842  319301 logs.go:282] 0 containers: []
	W1227 20:11:50.974852  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:50.974859  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:50.974928  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:51.001867  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:51.001890  319301 cri.go:96] found id: ""
	I1227 20:11:51.001899  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:51.001957  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:51.005758  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:51.005831  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:51.035904  319301 cri.go:96] found id: ""
	I1227 20:11:51.035979  319301 logs.go:282] 0 containers: []
	W1227 20:11:51.036002  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:51.036026  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:51.036134  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:51.064190  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:51.064213  319301 cri.go:96] found id: ""
	I1227 20:11:51.064222  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:51.064277  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:51.068971  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:51.069043  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:51.098066  319301 cri.go:96] found id: ""
	I1227 20:11:51.098092  319301 logs.go:282] 0 containers: []
	W1227 20:11:51.098101  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:51.098116  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:51.098128  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:51.193690  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:51.193731  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:51.236544  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:51.236578  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:51.275361  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:51.275397  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:51.309801  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:51.309827  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:51.327683  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:51.327711  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:51.401236  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:51.392227    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.393287    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.394285    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.395538    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.396222    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:51.392227    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.393287    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.394285    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.395538    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.396222    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:51.401259  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:51.401273  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:51.429955  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:51.429985  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:51.492625  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:51.492662  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:51.518481  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:51.518512  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:54.100065  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:54.111435  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:54.111510  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:54.142927  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:54.142956  319301 cri.go:96] found id: ""
	I1227 20:11:54.142975  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:54.143064  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:54.147093  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:54.147233  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:54.173813  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:54.173832  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:54.173837  319301 cri.go:96] found id: ""
	I1227 20:11:54.173844  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:54.173903  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:54.177570  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:54.181008  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:54.181079  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:54.206624  319301 cri.go:96] found id: ""
	I1227 20:11:54.206648  319301 logs.go:282] 0 containers: []
	W1227 20:11:54.206658  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:54.206664  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:54.206720  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:54.232185  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:54.232208  319301 cri.go:96] found id: ""
	I1227 20:11:54.232218  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:54.232281  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:54.236968  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:54.237047  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:54.266150  319301 cri.go:96] found id: ""
	I1227 20:11:54.266172  319301 logs.go:282] 0 containers: []
	W1227 20:11:54.266181  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:54.266187  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:54.266254  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:54.294800  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:54.294820  319301 cri.go:96] found id: ""
	I1227 20:11:54.294829  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:54.294880  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:54.298462  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:54.298526  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:54.323550  319301 cri.go:96] found id: ""
	I1227 20:11:54.323573  319301 logs.go:282] 0 containers: []
	W1227 20:11:54.323582  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:54.323599  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:54.323610  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:54.352757  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:54.352783  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:54.383438  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:54.383464  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:54.473431  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:54.473470  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:54.544121  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:54.535951    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.536753    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.538081    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.538522    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.540194    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:54.535951    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.536753    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.538081    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.538522    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.540194    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:54.544146  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:54.544162  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:54.587199  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:54.587231  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:54.625648  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:54.625675  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:54.708479  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:54.708513  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:54.727026  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:54.727055  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:54.758081  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:54.758110  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:57.311000  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:57.321234  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:57.321311  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:57.349011  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:57.349030  319301 cri.go:96] found id: ""
	I1227 20:11:57.349038  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:57.349091  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:57.353198  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:57.353266  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:57.378464  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:57.378489  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:57.378494  319301 cri.go:96] found id: ""
	I1227 20:11:57.378502  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:57.378564  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:57.382492  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:57.385894  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:57.385975  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:57.410564  319301 cri.go:96] found id: ""
	I1227 20:11:57.410629  319301 logs.go:282] 0 containers: []
	W1227 20:11:57.410642  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:57.410650  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:57.410708  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:57.437790  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:57.437814  319301 cri.go:96] found id: ""
	I1227 20:11:57.437823  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:57.437881  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:57.441526  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:57.441645  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:57.467252  319301 cri.go:96] found id: ""
	I1227 20:11:57.467319  319301 logs.go:282] 0 containers: []
	W1227 20:11:57.467334  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:57.467342  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:57.467406  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:57.495037  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:57.495058  319301 cri.go:96] found id: ""
	I1227 20:11:57.495067  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:57.495123  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:57.498778  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:57.498878  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:57.528106  319301 cri.go:96] found id: ""
	I1227 20:11:57.528133  319301 logs.go:282] 0 containers: []
	W1227 20:11:57.528142  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:57.528155  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:57.528168  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:57.619388  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:57.619424  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:57.650304  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:57.650332  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:57.699631  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:57.699667  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:57.743221  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:57.743254  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:57.769136  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:57.769164  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:57.786763  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:57.786790  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:57.859691  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:57.849669    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.850063    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.853911    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.854484    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.856001    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:57.849669    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.850063    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.853911    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.854484    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.856001    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:57.859713  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:57.859728  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:57.884558  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:57.884586  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:57.961115  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:57.961152  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:00.497672  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:00.510050  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:00.510129  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:00.544933  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:00.544956  319301 cri.go:96] found id: ""
	I1227 20:12:00.544965  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:00.545025  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:00.549158  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:00.549233  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:00.576607  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:00.576630  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:00.576636  319301 cri.go:96] found id: ""
	I1227 20:12:00.576643  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:00.576700  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:00.580716  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:00.584708  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:00.584783  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:00.623469  319301 cri.go:96] found id: ""
	I1227 20:12:00.623492  319301 logs.go:282] 0 containers: []
	W1227 20:12:00.623501  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:00.623508  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:00.623567  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:00.650388  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:00.650460  319301 cri.go:96] found id: ""
	I1227 20:12:00.650476  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:00.650537  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:00.654531  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:00.654613  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:00.685179  319301 cri.go:96] found id: ""
	I1227 20:12:00.685206  319301 logs.go:282] 0 containers: []
	W1227 20:12:00.685215  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:00.685222  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:00.685283  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:00.716017  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:00.716036  319301 cri.go:96] found id: ""
	I1227 20:12:00.716045  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:00.716102  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:00.720897  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:00.720967  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:00.752084  319301 cri.go:96] found id: ""
	I1227 20:12:00.752108  319301 logs.go:282] 0 containers: []
	W1227 20:12:00.752118  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:00.752133  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:00.752145  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:00.779162  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:00.779191  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:00.828229  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:00.828268  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:00.854975  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:00.855005  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:00.883576  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:00.883606  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:00.965151  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:00.965192  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:01.067209  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:01.067248  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:01.085199  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:01.085232  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:01.155625  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:01.146876    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.148053    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.148721    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.149832    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.150397    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:01.146876    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.148053    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.148721    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.149832    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.150397    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:01.155647  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:01.155660  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:01.206940  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:01.206978  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:03.749679  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:03.760472  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:03.760548  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:03.788993  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:03.789016  319301 cri.go:96] found id: ""
	I1227 20:12:03.789024  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:03.789079  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:03.792725  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:03.792798  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:03.817942  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:03.817964  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:03.817969  319301 cri.go:96] found id: ""
	I1227 20:12:03.817975  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:03.818031  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:03.821717  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:03.825168  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:03.825254  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:03.851505  319301 cri.go:96] found id: ""
	I1227 20:12:03.851527  319301 logs.go:282] 0 containers: []
	W1227 20:12:03.851536  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:03.851542  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:03.851606  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:03.878946  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:03.878971  319301 cri.go:96] found id: ""
	I1227 20:12:03.878980  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:03.879043  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:03.883057  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:03.883130  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:03.911906  319301 cri.go:96] found id: ""
	I1227 20:12:03.911933  319301 logs.go:282] 0 containers: []
	W1227 20:12:03.911943  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:03.911950  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:03.912009  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:03.942160  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:03.942183  319301 cri.go:96] found id: ""
	I1227 20:12:03.942192  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:03.942252  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:03.946415  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:03.946666  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:03.979149  319301 cri.go:96] found id: ""
	I1227 20:12:03.979174  319301 logs.go:282] 0 containers: []
	W1227 20:12:03.979182  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:03.979198  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:03.979210  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:04.005778  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:04.005811  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:04.088126  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:04.088160  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:04.119438  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:04.119469  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:04.190373  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:04.181899    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.182747    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.184416    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.184965    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.186575    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:04.181899    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.182747    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.184416    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.184965    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.186575    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:04.190394  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:04.190407  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:04.220233  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:04.220259  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:04.245645  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:04.245671  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:04.345961  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:04.345994  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:04.365659  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:04.365694  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:04.417757  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:04.417791  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:06.964717  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:06.979395  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:06.979502  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:07.006920  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:07.006954  319301 cri.go:96] found id: ""
	I1227 20:12:07.006964  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:07.007030  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:07.012095  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:07.012233  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:07.041413  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:07.041494  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:07.041512  319301 cri.go:96] found id: ""
	I1227 20:12:07.041520  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:07.041598  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:07.045354  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:07.049177  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:07.049259  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:07.083301  319301 cri.go:96] found id: ""
	I1227 20:12:07.083329  319301 logs.go:282] 0 containers: []
	W1227 20:12:07.083338  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:07.083344  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:07.083421  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:07.115313  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:07.115338  319301 cri.go:96] found id: ""
	I1227 20:12:07.115347  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:07.115417  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:07.119201  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:07.119288  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:07.146102  319301 cri.go:96] found id: ""
	I1227 20:12:07.146131  319301 logs.go:282] 0 containers: []
	W1227 20:12:07.146140  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:07.146147  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:07.146208  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:07.172141  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:07.172172  319301 cri.go:96] found id: ""
	I1227 20:12:07.172180  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:07.172247  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:07.175941  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:07.176014  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:07.201635  319301 cri.go:96] found id: ""
	I1227 20:12:07.201661  319301 logs.go:282] 0 containers: []
	W1227 20:12:07.201682  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:07.201699  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:07.201711  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:07.267041  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:07.258167    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.258717    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.260273    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.260745    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.262196    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:07.258167    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.258717    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.260273    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.260745    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.262196    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:07.267062  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:07.267076  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:07.299653  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:07.299681  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:07.379741  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:07.379776  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:07.478201  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:07.478238  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:07.496143  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:07.496172  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:07.524943  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:07.524973  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:07.588841  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:07.588883  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:07.639348  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:07.639391  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:07.671575  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:07.671608  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:10.217505  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:10.228493  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:10.228562  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:10.262225  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:10.262248  319301 cri.go:96] found id: ""
	I1227 20:12:10.262256  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:10.262312  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:10.267062  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:10.267197  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:10.296434  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:10.296459  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:10.296464  319301 cri.go:96] found id: ""
	I1227 20:12:10.296472  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:10.296529  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:10.300310  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:10.304957  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:10.305022  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:10.330532  319301 cri.go:96] found id: ""
	I1227 20:12:10.330560  319301 logs.go:282] 0 containers: []
	W1227 20:12:10.330570  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:10.330584  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:10.330646  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:10.361300  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:10.361324  319301 cri.go:96] found id: ""
	I1227 20:12:10.361332  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:10.361394  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:10.365025  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:10.365095  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:10.391129  319301 cri.go:96] found id: ""
	I1227 20:12:10.391150  319301 logs.go:282] 0 containers: []
	W1227 20:12:10.391159  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:10.391165  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:10.391228  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:10.427446  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:10.427467  319301 cri.go:96] found id: ""
	I1227 20:12:10.427475  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:10.427530  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:10.431147  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:10.431236  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:10.457621  319301 cri.go:96] found id: ""
	I1227 20:12:10.457645  319301 logs.go:282] 0 containers: []
	W1227 20:12:10.457653  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:10.457669  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:10.457680  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:10.497801  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:10.497832  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:10.533576  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:10.533606  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:10.563063  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:10.563092  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:10.595636  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:10.595663  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:10.707654  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:10.707734  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:10.727626  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:10.727752  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:10.859705  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:10.846588    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.847467    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.853805    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.854122    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.855621    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:10.846588    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.847467    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.853805    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.854122    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.855621    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:10.859774  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:10.859801  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:10.958101  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:10.958183  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:11.020263  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:11.020358  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:13.639948  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:13.650732  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:13.650797  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:13.676632  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:13.676651  319301 cri.go:96] found id: ""
	I1227 20:12:13.676658  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:13.676710  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:13.680432  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:13.680542  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:13.711606  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:13.711625  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:13.711630  319301 cri.go:96] found id: ""
	I1227 20:12:13.711637  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:13.711691  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:13.715265  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:13.718775  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:13.718931  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:13.746245  319301 cri.go:96] found id: ""
	I1227 20:12:13.746275  319301 logs.go:282] 0 containers: []
	W1227 20:12:13.746291  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:13.746298  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:13.746374  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:13.779388  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:13.779409  319301 cri.go:96] found id: ""
	I1227 20:12:13.779418  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:13.779504  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:13.783612  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:13.783685  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:13.808842  319301 cri.go:96] found id: ""
	I1227 20:12:13.808863  319301 logs.go:282] 0 containers: []
	W1227 20:12:13.808872  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:13.808878  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:13.808934  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:13.835153  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:13.835174  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:13.835179  319301 cri.go:96] found id: ""
	I1227 20:12:13.835187  319301 logs.go:282] 2 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:13.835249  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:13.839009  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:13.842805  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:13.842881  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:13.872544  319301 cri.go:96] found id: ""
	I1227 20:12:13.872570  319301 logs.go:282] 0 containers: []
	W1227 20:12:13.872579  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:13.872587  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:13.872599  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:13.898550  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:13.898578  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:13.924170  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:13.924197  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:14.003535  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:14.003571  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:14.105189  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:14.105228  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:14.176586  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:14.168398    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.169127    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.170691    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.171292    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.172935    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:14.168398    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.169127    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.170691    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.171292    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.172935    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:14.176608  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:14.176622  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:14.204979  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:14.205007  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:14.246862  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:14.246911  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:14.282199  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:14.282225  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:14.315428  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:14.315459  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:14.334814  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:14.334848  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:16.885569  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:16.896097  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:16.896162  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:16.925765  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:16.925785  319301 cri.go:96] found id: ""
	I1227 20:12:16.925794  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:16.925849  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:16.929283  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:16.929349  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:16.954491  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:16.954515  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:16.954520  319301 cri.go:96] found id: ""
	I1227 20:12:16.954528  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:16.954586  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:16.958221  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:16.961382  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:16.961573  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:16.994836  319301 cri.go:96] found id: ""
	I1227 20:12:16.994860  319301 logs.go:282] 0 containers: []
	W1227 20:12:16.994868  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:16.994874  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:16.994933  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:17.021903  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:17.021926  319301 cri.go:96] found id: ""
	I1227 20:12:17.021934  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:17.022017  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:17.025998  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:17.026093  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:17.052024  319301 cri.go:96] found id: ""
	I1227 20:12:17.052049  319301 logs.go:282] 0 containers: []
	W1227 20:12:17.052058  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:17.052083  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:17.052163  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:17.078719  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:17.078740  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:17.078744  319301 cri.go:96] found id: ""
	I1227 20:12:17.078752  319301 logs.go:282] 2 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:17.078826  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:17.082470  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:17.086147  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:17.086220  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:17.116980  319301 cri.go:96] found id: ""
	I1227 20:12:17.117003  319301 logs.go:282] 0 containers: []
	W1227 20:12:17.117013  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:17.117022  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:17.117033  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:17.196379  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:17.196418  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:17.230926  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:17.230959  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:17.250661  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:17.250691  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:17.322817  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:17.314780    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.315442    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.317018    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.317535    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.319106    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:17.314780    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.315442    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.317018    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.317535    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.319106    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:17.322840  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:17.322856  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:17.351684  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:17.351711  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:17.399098  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:17.399132  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:17.490988  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:17.491023  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:17.556151  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:17.556187  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:17.582835  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:17.582871  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:17.613801  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:17.613837  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:20.145063  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:20.156515  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:20.156583  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:20.187608  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:20.187635  319301 cri.go:96] found id: ""
	I1227 20:12:20.187645  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:20.187707  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:20.192025  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:20.192105  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:20.224749  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:20.224774  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:20.224780  319301 cri.go:96] found id: ""
	I1227 20:12:20.224788  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:20.224847  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:20.229081  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:20.233080  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:20.233183  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:20.265194  319301 cri.go:96] found id: ""
	I1227 20:12:20.265217  319301 logs.go:282] 0 containers: []
	W1227 20:12:20.265226  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:20.265233  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:20.265290  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:20.294941  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:20.294965  319301 cri.go:96] found id: ""
	I1227 20:12:20.294974  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:20.295030  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:20.299194  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:20.299295  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:20.327103  319301 cri.go:96] found id: ""
	I1227 20:12:20.327127  319301 logs.go:282] 0 containers: []
	W1227 20:12:20.327136  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:20.327142  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:20.327225  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:20.355319  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:20.355340  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:20.355351  319301 cri.go:96] found id: ""
	I1227 20:12:20.355359  319301 logs.go:282] 2 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:20.355441  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:20.359302  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:20.362848  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:20.362949  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:20.393433  319301 cri.go:96] found id: ""
	I1227 20:12:20.393488  319301 logs.go:282] 0 containers: []
	W1227 20:12:20.393498  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:20.393527  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:20.393545  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:20.421493  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:20.421522  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:20.498925  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:20.498966  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:20.519854  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:20.519883  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:20.576881  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:20.576922  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:20.621620  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:20.621656  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:20.649613  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:20.649648  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:20.685860  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:20.685889  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:20.779036  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:20.779072  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:20.846477  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:20.838325    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.838829    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.840489    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.841069    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.842962    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:20.838325    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.838829    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.840489    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.841069    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.842962    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:20.846497  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:20.846511  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:20.876493  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:20.876523  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:23.407116  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:23.417842  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:23.417914  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:23.449077  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:23.449100  319301 cri.go:96] found id: ""
	I1227 20:12:23.449108  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:23.449162  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:23.452848  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:23.452918  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:23.481566  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:23.481589  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:23.481595  319301 cri.go:96] found id: ""
	I1227 20:12:23.481602  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:23.481661  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:23.485561  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:23.489363  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:23.489433  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:23.515690  319301 cri.go:96] found id: ""
	I1227 20:12:23.515717  319301 logs.go:282] 0 containers: []
	W1227 20:12:23.515727  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:23.515734  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:23.515796  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:23.542113  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:23.542134  319301 cri.go:96] found id: ""
	I1227 20:12:23.542144  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:23.542198  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:23.546461  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:23.546535  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:23.572051  319301 cri.go:96] found id: ""
	I1227 20:12:23.572080  319301 logs.go:282] 0 containers: []
	W1227 20:12:23.572090  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:23.572096  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:23.572154  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:23.598223  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:23.598246  319301 cri.go:96] found id: ""
	I1227 20:12:23.598254  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:23.598308  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:23.602471  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:23.602548  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:23.632139  319301 cri.go:96] found id: ""
	I1227 20:12:23.632162  319301 logs.go:282] 0 containers: []
	W1227 20:12:23.632171  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:23.632185  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:23.632198  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:23.728534  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:23.728573  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:23.746910  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:23.746937  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:23.790408  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:23.790450  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:23.816648  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:23.816683  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:23.844206  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:23.844234  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:23.922341  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:23.922381  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:23.990219  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:23.981959    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.982768    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.984359    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.984673    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.986151    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:23.981959    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.982768    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.984359    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.984673    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.986151    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:23.990238  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:23.990252  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:24.021769  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:24.021804  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:24.077552  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:24.077591  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:26.612708  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:26.623326  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:26.623428  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:26.653266  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:26.653289  319301 cri.go:96] found id: ""
	I1227 20:12:26.653298  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:26.653373  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:26.657260  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:26.657353  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:26.683071  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:26.683092  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:26.683098  319301 cri.go:96] found id: ""
	I1227 20:12:26.683105  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:26.683166  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:26.686901  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:26.690560  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:26.690649  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:26.718862  319301 cri.go:96] found id: ""
	I1227 20:12:26.718885  319301 logs.go:282] 0 containers: []
	W1227 20:12:26.718894  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:26.718900  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:26.718959  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:26.747552  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:26.747574  319301 cri.go:96] found id: ""
	I1227 20:12:26.747582  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:26.747637  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:26.751375  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:26.751452  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:26.777853  319301 cri.go:96] found id: ""
	I1227 20:12:26.777880  319301 logs.go:282] 0 containers: []
	W1227 20:12:26.777889  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:26.777895  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:26.777957  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:26.804445  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:26.804468  319301 cri.go:96] found id: ""
	I1227 20:12:26.804477  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:26.804535  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:26.808568  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:26.808691  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:26.836896  319301 cri.go:96] found id: ""
	I1227 20:12:26.836922  319301 logs.go:282] 0 containers: []
	W1227 20:12:26.836932  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:26.836945  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:26.836960  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:26.857005  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:26.857033  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:26.928707  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:26.920823    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.921472    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.923023    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.923492    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.925222    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:26.920823    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.921472    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.923023    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.923492    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.925222    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:26.928729  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:26.928742  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:26.956493  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:26.956522  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:26.986280  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:26.986306  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:27.076259  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:27.076295  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:27.172547  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:27.172582  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:27.230338  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:27.230374  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:27.276521  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:27.276554  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:27.308603  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:27.308630  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:29.841840  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:29.852151  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:29.852219  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:29.879885  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:29.879922  319301 cri.go:96] found id: ""
	I1227 20:12:29.879931  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:29.880028  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:29.883662  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:29.883731  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:29.912705  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:29.912727  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:29.912733  319301 cri.go:96] found id: ""
	I1227 20:12:29.912740  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:29.912795  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:29.916252  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:29.921161  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:29.921231  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:29.950824  319301 cri.go:96] found id: ""
	I1227 20:12:29.950846  319301 logs.go:282] 0 containers: []
	W1227 20:12:29.950855  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:29.950862  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:29.950917  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:29.986337  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:29.986357  319301 cri.go:96] found id: ""
	I1227 20:12:29.986365  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:29.986420  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:29.990557  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:29.990644  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:30.034984  319301 cri.go:96] found id: ""
	I1227 20:12:30.035016  319301 logs.go:282] 0 containers: []
	W1227 20:12:30.035027  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:30.035034  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:30.035109  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:30.071248  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:30.071274  319301 cri.go:96] found id: ""
	I1227 20:12:30.071284  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:30.071380  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:30.075947  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:30.076061  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:30.105680  319301 cri.go:96] found id: ""
	I1227 20:12:30.105705  319301 logs.go:282] 0 containers: []
	W1227 20:12:30.105715  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:30.105730  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:30.105748  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:30.135961  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:30.135994  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:30.216289  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:30.216331  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:30.255913  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:30.255946  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:30.355835  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:30.355870  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:30.429441  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:30.421794    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.422353    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.423860    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.424337    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.426060    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:30.421794    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.422353    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.423860    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.424337    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.426060    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:30.429483  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:30.429495  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:30.458949  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:30.458978  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:30.502640  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:30.502677  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:30.532992  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:30.533023  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:30.557835  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:30.557866  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:33.116429  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:33.127018  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:33.127132  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:33.153291  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:33.153316  319301 cri.go:96] found id: ""
	I1227 20:12:33.153324  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:33.153379  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:33.157166  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:33.157239  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:33.183179  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:33.183200  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:33.183205  319301 cri.go:96] found id: ""
	I1227 20:12:33.183213  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:33.183265  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:33.186752  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:33.190422  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:33.190494  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:33.220717  319301 cri.go:96] found id: ""
	I1227 20:12:33.220739  319301 logs.go:282] 0 containers: []
	W1227 20:12:33.220748  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:33.220754  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:33.220818  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:33.251060  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:33.251083  319301 cri.go:96] found id: ""
	I1227 20:12:33.251091  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:33.251145  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:33.254679  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:33.254748  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:33.286493  319301 cri.go:96] found id: ""
	I1227 20:12:33.286518  319301 logs.go:282] 0 containers: []
	W1227 20:12:33.286527  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:33.286533  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:33.286620  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:33.313587  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:33.313613  319301 cri.go:96] found id: ""
	I1227 20:12:33.313622  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:33.313680  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:33.317328  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:33.317408  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:33.343846  319301 cri.go:96] found id: ""
	I1227 20:12:33.343871  319301 logs.go:282] 0 containers: []
	W1227 20:12:33.343880  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:33.343893  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:33.343925  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:33.438565  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:33.438603  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:33.457675  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:33.457705  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:33.525788  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:33.517888    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.518628    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.520164    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.520718    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.522282    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:33.517888    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.518628    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.520164    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.520718    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.522282    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:33.525811  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:33.525825  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:33.552529  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:33.552556  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:33.580140  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:33.580172  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:33.641393  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:33.641499  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:33.693161  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:33.693199  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:33.724867  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:33.724893  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:33.805497  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:33.805537  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:36.337435  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:36.352136  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:36.352206  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:36.378464  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:36.378486  319301 cri.go:96] found id: ""
	I1227 20:12:36.378494  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:36.378548  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:36.382431  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:36.382500  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:36.408340  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:36.408362  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:36.408367  319301 cri.go:96] found id: ""
	I1227 20:12:36.408375  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:36.408430  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:36.411977  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:36.415450  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:36.415561  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:36.441750  319301 cri.go:96] found id: ""
	I1227 20:12:36.441773  319301 logs.go:282] 0 containers: []
	W1227 20:12:36.441781  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:36.441789  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:36.441849  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:36.469111  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:36.469133  319301 cri.go:96] found id: ""
	I1227 20:12:36.469141  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:36.469193  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:36.472982  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:36.473055  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:36.501345  319301 cri.go:96] found id: ""
	I1227 20:12:36.501368  319301 logs.go:282] 0 containers: []
	W1227 20:12:36.501378  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:36.501384  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:36.501477  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:36.527577  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:36.527600  319301 cri.go:96] found id: ""
	I1227 20:12:36.527608  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:36.527664  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:36.531477  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:36.531552  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:36.561054  319301 cri.go:96] found id: ""
	I1227 20:12:36.561130  319301 logs.go:282] 0 containers: []
	W1227 20:12:36.561154  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:36.561181  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:36.561217  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:36.589983  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:36.590014  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:36.669955  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:36.669994  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:36.768958  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:36.768994  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:36.787310  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:36.787336  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:36.856793  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:36.848163    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.849099    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.850911    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.851491    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.853132    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:36.848163    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.849099    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.850911    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.851491    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.853132    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:36.856819  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:36.856834  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:36.909328  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:36.909366  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:36.960708  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:36.960741  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:36.988799  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:36.988826  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:37.020389  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:37.020426  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:39.556036  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:39.567454  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:39.567523  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:39.597767  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:39.597789  319301 cri.go:96] found id: ""
	I1227 20:12:39.597797  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:39.597853  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:39.601347  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:39.601417  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:39.630309  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:39.630330  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:39.630335  319301 cri.go:96] found id: ""
	I1227 20:12:39.630343  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:39.630395  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:39.634109  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:39.637369  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:39.637474  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:39.664492  319301 cri.go:96] found id: ""
	I1227 20:12:39.664515  319301 logs.go:282] 0 containers: []
	W1227 20:12:39.664523  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:39.664536  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:39.664595  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:39.689554  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:39.689585  319301 cri.go:96] found id: ""
	I1227 20:12:39.689594  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:39.689648  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:39.693184  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:39.693251  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:39.719030  319301 cri.go:96] found id: ""
	I1227 20:12:39.719057  319301 logs.go:282] 0 containers: []
	W1227 20:12:39.719066  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:39.719073  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:39.719131  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:39.751945  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:39.751967  319301 cri.go:96] found id: ""
	I1227 20:12:39.751976  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:39.752058  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:39.755910  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:39.755984  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:39.787281  319301 cri.go:96] found id: ""
	I1227 20:12:39.787306  319301 logs.go:282] 0 containers: []
	W1227 20:12:39.787315  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:39.787329  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:39.787341  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:39.818112  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:39.818181  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:39.877195  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:39.877228  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:39.902875  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:39.902908  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:39.933383  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:39.933411  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:39.964696  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:39.964725  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:40.094427  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:40.094546  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:40.115127  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:40.115169  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:40.188369  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:40.178140    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.178935    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.180929    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.181956    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.182727    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:40.178140    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.178935    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.180929    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.181956    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.182727    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:40.188403  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:40.188417  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:40.248250  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:40.248293  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:42.832956  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:42.843630  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:42.843716  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:42.880632  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:42.880654  319301 cri.go:96] found id: ""
	I1227 20:12:42.880662  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:42.880716  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:42.884197  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:42.884283  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:42.912329  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:42.912351  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:42.912356  319301 cri.go:96] found id: ""
	I1227 20:12:42.912363  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:42.912420  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:42.919733  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:42.924460  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:42.924555  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:42.950089  319301 cri.go:96] found id: ""
	I1227 20:12:42.950112  319301 logs.go:282] 0 containers: []
	W1227 20:12:42.950120  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:42.950126  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:42.950186  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:42.982372  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:42.982393  319301 cri.go:96] found id: ""
	I1227 20:12:42.982400  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:42.982454  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:42.985981  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:42.986048  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:43.025247  319301 cri.go:96] found id: ""
	I1227 20:12:43.025270  319301 logs.go:282] 0 containers: []
	W1227 20:12:43.025279  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:43.025285  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:43.025345  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:43.051039  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:43.051058  319301 cri.go:96] found id: ""
	I1227 20:12:43.051066  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:43.051128  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:43.055686  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:43.055774  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:43.080239  319301 cri.go:96] found id: ""
	I1227 20:12:43.080305  319301 logs.go:282] 0 containers: []
	W1227 20:12:43.080328  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:43.080365  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:43.080392  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:43.117618  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:43.117647  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:43.203203  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:43.203243  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:43.233482  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:43.233514  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:43.331030  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:43.331068  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:43.400596  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:43.391562    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.392218    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.393995    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.395389    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.396936    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:43.391562    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.392218    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.393995    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.395389    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.396936    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:43.400620  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:43.400635  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:43.451280  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:43.451316  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:43.469068  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:43.469097  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:43.497581  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:43.497607  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:43.541271  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:43.541307  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:46.066721  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:46.077342  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:46.077418  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:46.106073  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:46.106096  319301 cri.go:96] found id: ""
	I1227 20:12:46.106105  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:46.106161  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:46.110573  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:46.110647  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:46.141403  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:46.141426  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:46.141431  319301 cri.go:96] found id: ""
	I1227 20:12:46.141438  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:46.141524  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:46.146711  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:46.150119  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:46.150207  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:46.177378  319301 cri.go:96] found id: ""
	I1227 20:12:46.177403  319301 logs.go:282] 0 containers: []
	W1227 20:12:46.177411  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:46.177418  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:46.177523  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:46.203465  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:46.203488  319301 cri.go:96] found id: ""
	I1227 20:12:46.203497  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:46.203554  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:46.207163  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:46.207260  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:46.232721  319301 cri.go:96] found id: ""
	I1227 20:12:46.232748  319301 logs.go:282] 0 containers: []
	W1227 20:12:46.232757  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:46.232764  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:46.232849  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:46.260899  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:46.260924  319301 cri.go:96] found id: ""
	I1227 20:12:46.260933  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:46.261004  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:46.264880  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:46.264994  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:46.294702  319301 cri.go:96] found id: ""
	I1227 20:12:46.294772  319301 logs.go:282] 0 containers: []
	W1227 20:12:46.294788  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:46.294802  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:46.294815  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:46.392870  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:46.392907  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:46.411136  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:46.411165  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:46.442076  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:46.442105  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:46.507864  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:46.500419    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.500963    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.502621    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.503081    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.504499    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:46.500419    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.500963    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.502621    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.503081    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.504499    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:46.507887  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:46.507900  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:46.534504  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:46.534534  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:46.599046  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:46.599082  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:46.644197  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:46.644234  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:46.674716  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:46.674743  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:46.703463  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:46.703492  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
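	[editor's note] The block above (and the near-identical blocks that follow) is minikube's apiserver health-check loop: it repeatedly looks for control-plane containers with `sudo crictl --timeout=10s ps -a --quiet --name=<component>` and gathers their logs because `kubectl describe nodes` keeps failing with "connection refused" on localhost:8443. As a minimal illustrative sketch only (not minikube's actual code; the Go wrapper and its names are assumptions, while the command and flags are copied verbatim from the log), the container-discovery step could be driven like this:

	// listContainerIDs is a hypothetical helper mirroring the crictl lookup seen above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func listContainerIDs(component string) ([]string, error) {
		// Same invocation as in the log: sudo crictl --timeout=10s ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "--timeout=10s",
			"ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		// crictl --quiet prints one container ID per line; Fields drops blank lines.
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := listContainerIDs("kube-apiserver")
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}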
	I1227 20:12:49.285570  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:49.295868  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:49.295960  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:49.323445  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:49.323469  319301 cri.go:96] found id: ""
	I1227 20:12:49.323477  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:49.323567  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:49.327039  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:49.327106  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:49.353757  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:49.353781  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:49.353787  319301 cri.go:96] found id: ""
	I1227 20:12:49.353794  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:49.353854  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:49.360531  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:49.364480  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:49.364568  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:49.392254  319301 cri.go:96] found id: ""
	I1227 20:12:49.392325  319301 logs.go:282] 0 containers: []
	W1227 20:12:49.392349  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:49.392374  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:49.392458  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:49.422197  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:49.422218  319301 cri.go:96] found id: ""
	I1227 20:12:49.422226  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:49.422279  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:49.425742  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:49.425813  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:49.451624  319301 cri.go:96] found id: ""
	I1227 20:12:49.451650  319301 logs.go:282] 0 containers: []
	W1227 20:12:49.451659  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:49.451665  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:49.451725  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:49.477813  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:49.477836  319301 cri.go:96] found id: ""
	I1227 20:12:49.477846  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:49.477911  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:49.481531  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:49.481625  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:49.507374  319301 cri.go:96] found id: ""
	I1227 20:12:49.507400  319301 logs.go:282] 0 containers: []
	W1227 20:12:49.507409  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:49.507425  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:49.507438  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:49.598294  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:49.598336  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:49.636279  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:49.636307  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:49.707651  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:49.707686  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:49.765937  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:49.765972  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:49.783282  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:49.783310  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:49.868264  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:49.856321    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.857001    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.858772    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.863251    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.863608    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:49.856321    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.857001    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.858772    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.863251    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.863608    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:49.868294  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:49.868307  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:49.894496  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:49.894524  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:49.919827  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:49.919864  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:50.000367  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:50.000443  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:52.556360  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:52.566511  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:52.566580  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:52.593484  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:52.593517  319301 cri.go:96] found id: ""
	I1227 20:12:52.593527  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:52.593640  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:52.597279  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:52.597349  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:52.623469  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:52.623547  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:52.623568  319301 cri.go:96] found id: ""
	I1227 20:12:52.623591  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:52.623659  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:52.627305  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:52.630834  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:52.630949  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:52.657093  319301 cri.go:96] found id: ""
	I1227 20:12:52.657120  319301 logs.go:282] 0 containers: []
	W1227 20:12:52.657130  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:52.657136  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:52.657201  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:52.683396  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:52.683470  319301 cri.go:96] found id: ""
	I1227 20:12:52.683487  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:52.683556  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:52.687311  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:52.687381  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:52.716233  319301 cri.go:96] found id: ""
	I1227 20:12:52.716257  319301 logs.go:282] 0 containers: []
	W1227 20:12:52.716266  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:52.716273  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:52.716333  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:52.742458  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:52.742482  319301 cri.go:96] found id: ""
	I1227 20:12:52.742491  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:52.742547  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:52.746498  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:52.746629  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:52.771746  319301 cri.go:96] found id: ""
	I1227 20:12:52.771772  319301 logs.go:282] 0 containers: []
	W1227 20:12:52.771781  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:52.771820  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:52.771837  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:52.824894  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:52.824929  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:52.854289  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:52.854318  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:52.889855  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:52.889887  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:52.993260  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:52.993294  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:53.038574  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:53.038617  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:53.071005  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:53.071035  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:53.149881  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:53.149919  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:53.167391  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:53.167547  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:53.240789  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:53.230138    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.230860    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.232667    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.233277    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.236557    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:53.230138    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.230860    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.232667    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.233277    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.236557    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:53.240810  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:53.240823  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:55.779743  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:55.790606  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:55.790677  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:55.817091  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:55.817112  319301 cri.go:96] found id: ""
	I1227 20:12:55.817121  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:55.817176  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:55.820799  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:55.820876  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:55.850874  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:55.850897  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:55.850903  319301 cri.go:96] found id: ""
	I1227 20:12:55.850911  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:55.850964  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:55.854708  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:55.858278  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:55.858347  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:55.887432  319301 cri.go:96] found id: ""
	I1227 20:12:55.887456  319301 logs.go:282] 0 containers: []
	W1227 20:12:55.887465  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:55.887471  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:55.887526  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:55.914817  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:55.914839  319301 cri.go:96] found id: ""
	I1227 20:12:55.914847  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:55.914903  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:55.918494  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:55.918571  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:55.948625  319301 cri.go:96] found id: ""
	I1227 20:12:55.948648  319301 logs.go:282] 0 containers: []
	W1227 20:12:55.948657  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:55.948664  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:55.948733  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:55.984844  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:55.984867  319301 cri.go:96] found id: ""
	I1227 20:12:55.984875  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:55.984930  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:55.988564  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:55.988652  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:56.016926  319301 cri.go:96] found id: ""
	I1227 20:12:56.016956  319301 logs.go:282] 0 containers: []
	W1227 20:12:56.016966  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:56.016982  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:56.016994  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:56.118289  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:56.118325  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:56.136502  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:56.136532  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:56.169081  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:56.169108  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:56.211041  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:56.211076  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:56.243209  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:56.243244  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:56.314060  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:56.305651    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.306321    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.307810    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.308362    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.310021    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:56.305651    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.306321    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.307810    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.308362    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.310021    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:56.314082  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:56.314098  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:56.377302  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:56.377341  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:56.410912  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:56.410991  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:56.438190  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:56.438218  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:59.018860  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:59.029806  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:59.029879  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:59.058607  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:59.058631  319301 cri.go:96] found id: ""
	I1227 20:12:59.058640  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:59.058697  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:59.062467  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:59.062544  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:59.091353  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:59.091376  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:59.091382  319301 cri.go:96] found id: ""
	I1227 20:12:59.091389  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:59.091445  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:59.095198  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:59.100058  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:59.100137  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:59.126292  319301 cri.go:96] found id: ""
	I1227 20:12:59.126317  319301 logs.go:282] 0 containers: []
	W1227 20:12:59.126326  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:59.126333  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:59.126397  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:59.155155  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:59.155177  319301 cri.go:96] found id: ""
	I1227 20:12:59.155186  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:59.155242  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:59.158920  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:59.158992  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:59.189092  319301 cri.go:96] found id: ""
	I1227 20:12:59.189159  319301 logs.go:282] 0 containers: []
	W1227 20:12:59.189181  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:59.189206  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:59.189294  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:59.216198  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:59.216262  319301 cri.go:96] found id: ""
	I1227 20:12:59.216285  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:59.216377  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:59.224385  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:59.224486  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:59.252259  319301 cri.go:96] found id: ""
	I1227 20:12:59.252285  319301 logs.go:282] 0 containers: []
	W1227 20:12:59.252294  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:59.252309  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:59.252342  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:59.273005  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:59.273034  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:59.301850  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:59.301881  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:59.356187  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:59.356221  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:59.399819  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:59.399852  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:59.433910  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:59.433941  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:59.513398  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:59.513432  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:59.549380  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:59.549409  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:59.623298  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:59.615506    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.615904    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.617387    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.618024    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.619495    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:59.615506    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.615904    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.617387    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.618024    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.619495    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:59.623322  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:59.623336  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:59.649178  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:59.649207  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:02.243275  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:02.254105  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:02.254177  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:02.286583  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:02.286605  319301 cri.go:96] found id: ""
	I1227 20:13:02.286613  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:02.286669  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:02.290640  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:02.290708  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:02.317723  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:02.317746  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:02.317752  319301 cri.go:96] found id: ""
	I1227 20:13:02.317760  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:02.317817  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:02.322227  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:02.325742  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:02.325814  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:02.352306  319301 cri.go:96] found id: ""
	I1227 20:13:02.352333  319301 logs.go:282] 0 containers: []
	W1227 20:13:02.352342  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:02.352349  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:02.352409  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:02.378873  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:02.378896  319301 cri.go:96] found id: ""
	I1227 20:13:02.378906  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:02.378961  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:02.383556  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:02.383681  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:02.421495  319301 cri.go:96] found id: ""
	I1227 20:13:02.421526  319301 logs.go:282] 0 containers: []
	W1227 20:13:02.421550  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:02.421579  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:02.421661  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:02.454963  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:02.454985  319301 cri.go:96] found id: ""
	I1227 20:13:02.454994  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:02.455071  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:02.458781  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:02.458901  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:02.488822  319301 cri.go:96] found id: ""
	I1227 20:13:02.488848  319301 logs.go:282] 0 containers: []
	W1227 20:13:02.488857  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:02.488872  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:02.488904  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:02.513914  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:02.513945  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:02.543786  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:02.543815  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:02.602843  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:02.602877  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:02.634221  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:02.634257  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:02.736305  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:02.736347  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:02.812827  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:02.803912    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.804866    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.806654    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.807254    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.808858    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:02.803912    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.804866    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.806654    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.807254    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.808858    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:02.812848  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:02.812861  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:02.870730  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:02.870770  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:02.896826  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:02.896857  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:02.928575  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:02.928604  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:05.512539  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:05.522703  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:05.522777  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:05.549167  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:05.549187  319301 cri.go:96] found id: ""
	I1227 20:13:05.549195  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:05.549252  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:05.553114  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:05.553224  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:05.591305  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:05.591329  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:05.591334  319301 cri.go:96] found id: ""
	I1227 20:13:05.591342  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:05.591399  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:05.595292  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:05.598966  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:05.599090  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:05.626541  319301 cri.go:96] found id: ""
	I1227 20:13:05.626567  319301 logs.go:282] 0 containers: []
	W1227 20:13:05.626576  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:05.626583  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:05.626644  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:05.658675  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:05.658707  319301 cri.go:96] found id: ""
	I1227 20:13:05.658715  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:05.658771  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:05.662500  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:05.662571  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:05.694208  319301 cri.go:96] found id: ""
	I1227 20:13:05.694232  319301 logs.go:282] 0 containers: []
	W1227 20:13:05.694241  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:05.694248  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:05.694310  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:05.721109  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:05.721133  319301 cri.go:96] found id: ""
	I1227 20:13:05.721152  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:05.721212  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:05.724940  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:05.725010  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:05.751566  319301 cri.go:96] found id: ""
	I1227 20:13:05.751594  319301 logs.go:282] 0 containers: []
	W1227 20:13:05.751604  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:05.751643  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:05.751660  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:05.849663  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:05.849750  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:05.868576  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:05.868607  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:05.934428  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:05.925753    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.926400    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.928037    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.928648    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.930245    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:05.925753    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.926400    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.928037    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.928648    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.930245    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:05.934452  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:05.934466  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:05.965352  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:05.965378  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:06.020452  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:06.020494  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:06.054720  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:06.054750  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:06.084316  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:06.084346  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:06.166870  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:06.166934  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:06.221058  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:06.221095  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:08.753099  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:08.764525  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:08.764592  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:08.790692  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:08.790714  319301 cri.go:96] found id: ""
	I1227 20:13:08.790725  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:08.790781  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:08.794565  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:08.794679  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:08.820711  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:08.820730  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:08.820734  319301 cri.go:96] found id: ""
	I1227 20:13:08.820741  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:08.820797  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:08.824460  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:08.827902  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:08.827991  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:08.869147  319301 cri.go:96] found id: ""
	I1227 20:13:08.869171  319301 logs.go:282] 0 containers: []
	W1227 20:13:08.869184  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:08.869190  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:08.869273  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:08.897503  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:08.897528  319301 cri.go:96] found id: ""
	I1227 20:13:08.897545  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:08.897605  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:08.902138  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:08.902257  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:08.931144  319301 cri.go:96] found id: ""
	I1227 20:13:08.931168  319301 logs.go:282] 0 containers: []
	W1227 20:13:08.931177  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:08.931183  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:08.931240  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:08.958779  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:08.958802  319301 cri.go:96] found id: ""
	I1227 20:13:08.958810  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:08.958892  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:08.962888  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:08.962966  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:08.991222  319301 cri.go:96] found id: ""
	I1227 20:13:08.991248  319301 logs.go:282] 0 containers: []
	W1227 20:13:08.991257  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:08.991270  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:08.991310  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:09.009225  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:09.009256  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:09.081569  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:09.073722    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.074157    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.075724    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.076257    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.078038    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:09.073722    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.074157    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.075724    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.076257    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.078038    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:09.081592  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:09.081608  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:09.112754  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:09.112780  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:09.163779  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:09.163815  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:09.189441  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:09.189512  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:09.271488  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:09.271569  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:09.314936  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:09.314962  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:09.413305  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:09.413344  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:09.465609  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:09.465639  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:12.002552  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:12.014182  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:12.014264  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:12.052377  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:12.052400  319301 cri.go:96] found id: ""
	I1227 20:13:12.052409  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:12.052466  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:12.056292  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:12.056394  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:12.085743  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:12.085765  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:12.085770  319301 cri.go:96] found id: ""
	I1227 20:13:12.085778  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:12.085835  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:12.089812  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:12.093801  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:12.093896  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:12.122289  319301 cri.go:96] found id: ""
	I1227 20:13:12.122359  319301 logs.go:282] 0 containers: []
	W1227 20:13:12.122386  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:12.122402  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:12.122476  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:12.149731  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:12.149758  319301 cri.go:96] found id: ""
	I1227 20:13:12.149767  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:12.149823  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:12.153602  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:12.153688  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:12.178711  319301 cri.go:96] found id: ""
	I1227 20:13:12.178786  319301 logs.go:282] 0 containers: []
	W1227 20:13:12.178808  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:12.178832  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:12.178917  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:12.205322  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:12.205350  319301 cri.go:96] found id: ""
	I1227 20:13:12.205360  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:12.205414  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:12.209024  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:12.209091  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:12.234488  319301 cri.go:96] found id: ""
	I1227 20:13:12.234557  319301 logs.go:282] 0 containers: []
	W1227 20:13:12.234582  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:12.234609  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:12.234640  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:12.261610  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:12.261639  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:12.315635  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:12.315673  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:12.376280  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:12.376313  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:12.402133  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:12.402165  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:12.430982  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:12.431051  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:12.512045  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:12.512078  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:12.530685  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:12.530716  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:12.568375  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:12.568405  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:12.668785  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:12.668822  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:12.735523  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:12.727415    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.728180    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.729943    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.730267    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.732211    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:12.727415    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.728180    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.729943    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.730267    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.732211    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:15.236014  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:15.247391  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:15.247466  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:15.277268  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:15.277342  319301 cri.go:96] found id: ""
	I1227 20:13:15.277365  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:15.277488  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:15.282305  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:15.282373  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:15.312415  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:15.312436  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:15.312441  319301 cri.go:96] found id: ""
	I1227 20:13:15.312449  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:15.312503  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:15.316541  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:15.319901  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:15.319970  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:15.346399  319301 cri.go:96] found id: ""
	I1227 20:13:15.346424  319301 logs.go:282] 0 containers: []
	W1227 20:13:15.346432  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:15.346439  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:15.346496  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:15.373083  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:15.373104  319301 cri.go:96] found id: ""
	I1227 20:13:15.373112  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:15.373165  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:15.376806  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:15.376918  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:15.401683  319301 cri.go:96] found id: ""
	I1227 20:13:15.401708  319301 logs.go:282] 0 containers: []
	W1227 20:13:15.401717  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:15.401725  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:15.401784  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:15.425772  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:15.425796  319301 cri.go:96] found id: ""
	I1227 20:13:15.425804  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:15.425865  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:15.429359  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:15.429426  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:15.457327  319301 cri.go:96] found id: ""
	I1227 20:13:15.457352  319301 logs.go:282] 0 containers: []
	W1227 20:13:15.457361  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:15.457374  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:15.457387  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:15.499826  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:15.499863  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:15.530003  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:15.530040  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:15.557784  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:15.557811  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:15.637950  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:15.637987  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:15.706856  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:15.696364    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.696954    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.699252    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.700375    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.701334    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:15.696364    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.696954    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.699252    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.700375    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.701334    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:15.706878  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:15.706893  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:15.742198  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:15.742227  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:15.838586  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:15.838624  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:15.857986  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:15.858016  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:15.889281  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:15.889313  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:18.468232  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:18.478612  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:18.478682  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:18.506032  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:18.506056  319301 cri.go:96] found id: ""
	I1227 20:13:18.506064  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:18.506116  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:18.509751  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:18.509832  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:18.537503  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:18.537527  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:18.537533  319301 cri.go:96] found id: ""
	I1227 20:13:18.537541  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:18.537645  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:18.543736  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:18.548696  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:18.548770  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:18.574950  319301 cri.go:96] found id: ""
	I1227 20:13:18.574986  319301 logs.go:282] 0 containers: []
	W1227 20:13:18.574996  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:18.575003  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:18.575063  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:18.603311  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:18.603330  319301 cri.go:96] found id: ""
	I1227 20:13:18.603337  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:18.603391  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:18.607317  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:18.607399  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:18.637190  319301 cri.go:96] found id: ""
	I1227 20:13:18.637214  319301 logs.go:282] 0 containers: []
	W1227 20:13:18.637223  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:18.637230  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:18.637290  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:18.664240  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:18.664260  319301 cri.go:96] found id: ""
	I1227 20:13:18.664268  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:18.664323  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:18.667779  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:18.667845  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:18.694174  319301 cri.go:96] found id: ""
	I1227 20:13:18.694198  319301 logs.go:282] 0 containers: []
	W1227 20:13:18.694208  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:18.694222  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:18.694235  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:18.718997  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:18.719027  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:18.745989  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:18.746067  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:18.822381  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:18.822419  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:18.867357  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:18.867387  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:18.970030  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:18.970069  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:18.991124  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:18.991208  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:19.073512  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:19.064985    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.065841    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.067396    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.067963    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.069601    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:19.064985    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.065841    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.067396    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.067963    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.069601    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:19.073537  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:19.073559  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:19.102691  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:19.102717  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:19.156409  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:19.156445  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:21.705847  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:21.716387  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:21.716462  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:21.750665  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:21.750735  319301 cri.go:96] found id: ""
	I1227 20:13:21.750770  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:21.750862  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:21.754653  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:21.754723  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:21.779914  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:21.779938  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:21.779944  319301 cri.go:96] found id: ""
	I1227 20:13:21.779952  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:21.780015  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:21.783993  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:21.787625  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:21.787696  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:21.813514  319301 cri.go:96] found id: ""
	I1227 20:13:21.813543  319301 logs.go:282] 0 containers: []
	W1227 20:13:21.813552  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:21.813559  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:21.813629  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:21.844946  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:21.844968  319301 cri.go:96] found id: ""
	I1227 20:13:21.844976  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:21.845035  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:21.848813  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:21.848884  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:21.874101  319301 cri.go:96] found id: ""
	I1227 20:13:21.874174  319301 logs.go:282] 0 containers: []
	W1227 20:13:21.874190  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:21.874197  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:21.874255  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:21.900432  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:21.900455  319301 cri.go:96] found id: ""
	I1227 20:13:21.900463  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:21.900518  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:21.904020  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:21.904092  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:21.931082  319301 cri.go:96] found id: ""
	I1227 20:13:21.931107  319301 logs.go:282] 0 containers: []
	W1227 20:13:21.931116  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:21.931130  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:21.931173  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:21.977536  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:21.977621  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:22.057131  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:22.057167  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:22.162849  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:22.162890  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:22.181044  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:22.181074  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:22.251501  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:22.243628    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.244178    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.245787    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.246465    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.248081    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:22.243628    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.244178    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.245787    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.246465    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.248081    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:22.251520  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:22.251532  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:22.322039  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:22.322076  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:22.348945  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:22.348981  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:22.376440  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:22.376468  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:22.411192  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:22.411219  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:24.942580  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:24.952758  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:24.952881  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:24.984548  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:24.984572  319301 cri.go:96] found id: ""
	I1227 20:13:24.984580  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:24.984656  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:24.988133  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:24.988203  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:25.026479  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:25.026581  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:25.026603  319301 cri.go:96] found id: ""
	I1227 20:13:25.026645  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:25.026785  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:25.030841  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:25.034716  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:25.034800  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:25.061711  319301 cri.go:96] found id: ""
	I1227 20:13:25.061738  319301 logs.go:282] 0 containers: []
	W1227 20:13:25.061747  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:25.061753  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:25.061810  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:25.089318  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:25.089386  319301 cri.go:96] found id: ""
	I1227 20:13:25.089409  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:25.089517  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:25.093670  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:25.093795  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:25.121407  319301 cri.go:96] found id: ""
	I1227 20:13:25.121525  319301 logs.go:282] 0 containers: []
	W1227 20:13:25.121549  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:25.121569  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:25.121669  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:25.149007  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:25.149080  319301 cri.go:96] found id: ""
	I1227 20:13:25.149103  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:25.149187  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:25.153407  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:25.153596  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:25.179032  319301 cri.go:96] found id: ""
	I1227 20:13:25.179057  319301 logs.go:282] 0 containers: []
	W1227 20:13:25.179066  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:25.179079  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:25.179090  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:25.276200  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:25.276277  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:25.348617  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:25.340243    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.340862    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.343120    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.343588    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.345111    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:25.340243    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.340862    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.343120    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.343588    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.345111    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:25.348638  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:25.348655  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:25.406272  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:25.406306  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:25.452731  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:25.452768  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:25.480251  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:25.480280  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:25.557948  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:25.557985  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:25.593809  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:25.593838  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:25.615397  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:25.615429  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:25.646218  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:25.646248  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:28.174341  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:28.185173  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:28.185244  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:28.211104  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:28.211127  319301 cri.go:96] found id: ""
	I1227 20:13:28.211136  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:28.211191  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:28.214901  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:28.215009  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:28.246215  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:28.246280  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:28.246301  319301 cri.go:96] found id: ""
	I1227 20:13:28.246324  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:28.246405  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:28.250387  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:28.253817  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:28.253888  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:28.287626  319301 cri.go:96] found id: ""
	I1227 20:13:28.287651  319301 logs.go:282] 0 containers: []
	W1227 20:13:28.287659  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:28.287665  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:28.287725  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:28.316933  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:28.316954  319301 cri.go:96] found id: ""
	I1227 20:13:28.316962  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:28.317018  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:28.320933  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:28.321004  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:28.347084  319301 cri.go:96] found id: ""
	I1227 20:13:28.347112  319301 logs.go:282] 0 containers: []
	W1227 20:13:28.347122  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:28.347128  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:28.347185  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:28.378083  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:28.378106  319301 cri.go:96] found id: ""
	I1227 20:13:28.378115  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:28.378169  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:28.382099  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:28.382172  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:28.409209  319301 cri.go:96] found id: ""
	I1227 20:13:28.409235  319301 logs.go:282] 0 containers: []
	W1227 20:13:28.409244  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:28.409257  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:28.409270  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:28.427091  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:28.427120  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:28.490226  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:28.482506    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.483031    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.484594    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.484922    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.486441    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:28.482506    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.483031    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.484594    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.484922    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.486441    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:28.490251  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:28.490265  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:28.531892  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:28.531924  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:28.557604  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:28.557631  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:28.652391  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:28.652428  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:28.680025  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:28.680051  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:28.737147  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:28.737182  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:28.765648  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:28.765682  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:28.843337  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:28.843374  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:31.382818  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:31.393355  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:31.393426  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:31.420305  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:31.420328  319301 cri.go:96] found id: ""
	I1227 20:13:31.420336  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:31.420391  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:31.424001  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:31.424074  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:31.460581  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:31.460615  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:31.460621  319301 cri.go:96] found id: ""
	I1227 20:13:31.460635  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:31.460702  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:31.464544  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:31.468299  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:31.468414  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:31.500491  319301 cri.go:96] found id: ""
	I1227 20:13:31.500517  319301 logs.go:282] 0 containers: []
	W1227 20:13:31.500526  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:31.500533  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:31.500590  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:31.527178  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:31.527203  319301 cri.go:96] found id: ""
	I1227 20:13:31.527211  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:31.527273  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:31.530886  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:31.530980  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:31.558444  319301 cri.go:96] found id: ""
	I1227 20:13:31.558466  319301 logs.go:282] 0 containers: []
	W1227 20:13:31.558475  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:31.558482  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:31.558583  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:31.583987  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:31.584010  319301 cri.go:96] found id: ""
	I1227 20:13:31.584019  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:31.584072  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:31.587656  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:31.587728  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:31.613640  319301 cri.go:96] found id: ""
	I1227 20:13:31.613662  319301 logs.go:282] 0 containers: []
	W1227 20:13:31.613671  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:31.613692  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:31.613708  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:31.642242  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:31.642274  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:31.724401  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:31.724439  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:31.793926  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:31.785945    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.786581    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.788181    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.788659    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.789864    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:31.785945    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.786581    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.788181    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.788659    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.789864    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:31.793989  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:31.794011  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:31.825164  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:31.825193  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:31.877179  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:31.877211  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:31.912284  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:31.912319  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:32.015514  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:32.015558  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:32.034674  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:32.034705  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:32.099008  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:32.099062  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:34.634778  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:34.656177  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:34.656243  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:34.684782  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:34.684801  319301 cri.go:96] found id: ""
	I1227 20:13:34.684810  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:34.684865  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:34.688514  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:34.688585  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:34.712895  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:34.712915  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:34.712921  319301 cri.go:96] found id: ""
	I1227 20:13:34.712928  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:34.712995  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:34.716706  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:34.720270  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:34.720346  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:34.746430  319301 cri.go:96] found id: ""
	I1227 20:13:34.746456  319301 logs.go:282] 0 containers: []
	W1227 20:13:34.746465  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:34.746472  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:34.746530  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:34.773423  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:34.773481  319301 cri.go:96] found id: ""
	I1227 20:13:34.773490  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:34.773560  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:34.777238  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:34.777325  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:34.804429  319301 cri.go:96] found id: ""
	I1227 20:13:34.804455  319301 logs.go:282] 0 containers: []
	W1227 20:13:34.804464  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:34.804471  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:34.804528  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:34.837390  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:34.837412  319301 cri.go:96] found id: ""
	I1227 20:13:34.837421  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:34.837518  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:34.841292  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:34.841362  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:34.882512  319301 cri.go:96] found id: ""
	I1227 20:13:34.882537  319301 logs.go:282] 0 containers: []
	W1227 20:13:34.882547  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:34.882561  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:34.882593  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:34.935722  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:34.935778  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:34.963786  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:34.963815  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:35.068786  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:35.068824  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:35.118359  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:35.118402  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:35.146117  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:35.146144  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:35.223101  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:35.223145  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:35.255059  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:35.255089  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:35.276475  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:35.276510  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:35.351174  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:35.342460    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.343305    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.344856    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.345617    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.347573    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:35.342460    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.343305    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.344856    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.345617    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.347573    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:35.351239  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:35.351268  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:37.881796  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:37.894482  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:37.894556  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:37.924732  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:37.924756  319301 cri.go:96] found id: ""
	I1227 20:13:37.924765  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:37.924821  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:37.928636  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:37.928711  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:37.956752  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:37.956775  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:37.956781  319301 cri.go:96] found id: ""
	I1227 20:13:37.956801  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:37.956860  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:37.960536  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:37.964778  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:37.964879  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:37.998167  319301 cri.go:96] found id: ""
	I1227 20:13:37.998192  319301 logs.go:282] 0 containers: []
	W1227 20:13:37.998202  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:37.998208  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:37.998268  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:38.027828  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:38.027903  319301 cri.go:96] found id: ""
	I1227 20:13:38.027928  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:38.028019  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:38.032285  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:38.032374  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:38.063193  319301 cri.go:96] found id: ""
	I1227 20:13:38.063219  319301 logs.go:282] 0 containers: []
	W1227 20:13:38.063238  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:38.063277  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:38.063338  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:38.100160  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:38.100184  319301 cri.go:96] found id: ""
	I1227 20:13:38.100192  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:38.100248  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:38.104272  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:38.104360  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:38.132286  319301 cri.go:96] found id: ""
	I1227 20:13:38.132319  319301 logs.go:282] 0 containers: []
	W1227 20:13:38.132329  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:38.132343  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:38.132355  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:38.163697  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:38.163723  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:38.181632  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:38.181662  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:38.210225  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:38.210258  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:38.255805  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:38.255842  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:38.358465  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:38.358500  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:38.425713  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:38.417673    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.418194    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.420263    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.420756    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.422182    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:38.417673    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.418194    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.420263    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.420756    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.422182    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:38.425743  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:38.425766  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:38.481423  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:38.481466  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:38.506752  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:38.506783  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:38.536076  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:38.536104  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:41.112032  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:41.122203  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:41.122272  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:41.147769  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:41.147833  319301 cri.go:96] found id: ""
	I1227 20:13:41.147858  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:41.147945  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:41.151581  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:41.151651  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:41.176060  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:41.176078  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:41.176082  319301 cri.go:96] found id: ""
	I1227 20:13:41.176090  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:41.176144  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:41.179877  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:41.183247  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:41.183311  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:41.212692  319301 cri.go:96] found id: ""
	I1227 20:13:41.212717  319301 logs.go:282] 0 containers: []
	W1227 20:13:41.212727  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:41.212733  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:41.212814  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:41.237313  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:41.237335  319301 cri.go:96] found id: ""
	I1227 20:13:41.237343  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:41.237429  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:41.241432  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:41.241552  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:41.274168  319301 cri.go:96] found id: ""
	I1227 20:13:41.274196  319301 logs.go:282] 0 containers: []
	W1227 20:13:41.274206  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:41.274212  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:41.274295  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:41.300597  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:41.300620  319301 cri.go:96] found id: ""
	I1227 20:13:41.300628  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:41.300702  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:41.304360  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:41.304466  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:41.330795  319301 cri.go:96] found id: ""
	I1227 20:13:41.330819  319301 logs.go:282] 0 containers: []
	W1227 20:13:41.330828  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:41.330860  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:41.330885  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:41.358931  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:41.358960  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:41.383514  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:41.383539  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:41.469734  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:41.469771  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:41.573372  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:41.573411  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:41.591886  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:41.591916  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:41.674483  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:41.665884    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.666635    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.667427    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.669130    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.669864    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:41.665884    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.666635    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.667427    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.669130    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.669864    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:41.674507  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:41.674521  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:41.756704  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:41.756741  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:41.803676  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:41.803709  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:41.838752  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:41.838785  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:44.371993  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:44.382732  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:44.382811  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:44.408302  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:44.408324  319301 cri.go:96] found id: ""
	I1227 20:13:44.408332  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:44.408387  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:44.411908  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:44.411977  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:44.438505  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:44.438537  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:44.438543  319301 cri.go:96] found id: ""
	I1227 20:13:44.438551  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:44.438612  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:44.443020  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:44.446843  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:44.446907  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:44.473249  319301 cri.go:96] found id: ""
	I1227 20:13:44.473273  319301 logs.go:282] 0 containers: []
	W1227 20:13:44.473282  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:44.473288  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:44.473344  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:44.506635  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:44.506657  319301 cri.go:96] found id: ""
	I1227 20:13:44.506665  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:44.506719  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:44.510255  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:44.510327  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:44.535681  319301 cri.go:96] found id: ""
	I1227 20:13:44.535706  319301 logs.go:282] 0 containers: []
	W1227 20:13:44.535715  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:44.535722  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:44.535779  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:44.566431  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:44.566454  319301 cri.go:96] found id: ""
	I1227 20:13:44.566463  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:44.566544  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:44.570308  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:44.570429  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:44.596900  319301 cri.go:96] found id: ""
	I1227 20:13:44.596925  319301 logs.go:282] 0 containers: []
	W1227 20:13:44.596935  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:44.596969  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:44.596988  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:44.641306  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:44.641338  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:44.670860  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:44.670887  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:44.698228  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:44.698303  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:44.781609  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:44.781645  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:44.832828  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:44.832857  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:44.851403  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:44.851434  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:44.883766  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:44.883796  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:44.982715  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:44.982754  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:45.102278  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:45.090748   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.091715   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.092803   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.093981   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.094942   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:45.090748   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.091715   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.092803   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.093981   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.094942   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:45.102308  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:45.102333  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:47.711741  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:47.722289  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:47.722355  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:47.752456  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:47.752475  319301 cri.go:96] found id: ""
	I1227 20:13:47.752483  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:47.752545  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:47.756223  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:47.756290  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:47.781994  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:47.782016  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:47.782021  319301 cri.go:96] found id: ""
	I1227 20:13:47.782029  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:47.782082  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:47.785803  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:47.789134  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:47.789202  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:47.819133  319301 cri.go:96] found id: ""
	I1227 20:13:47.819166  319301 logs.go:282] 0 containers: []
	W1227 20:13:47.819176  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:47.819188  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:47.819261  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:47.848513  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:47.848534  319301 cri.go:96] found id: ""
	I1227 20:13:47.848542  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:47.848602  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:47.852477  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:47.852545  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:47.879163  319301 cri.go:96] found id: ""
	I1227 20:13:47.879188  319301 logs.go:282] 0 containers: []
	W1227 20:13:47.879198  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:47.879204  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:47.879288  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:47.906400  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:47.906422  319301 cri.go:96] found id: ""
	I1227 20:13:47.906430  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:47.906487  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:47.910061  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:47.910142  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:47.936751  319301 cri.go:96] found id: ""
	I1227 20:13:47.936822  319301 logs.go:282] 0 containers: []
	W1227 20:13:47.936855  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:47.936885  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:47.936928  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:48.041904  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:48.041941  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:48.059753  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:48.059783  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:48.091794  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:48.091825  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:48.119314  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:48.119341  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:48.167631  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:48.167656  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:48.236954  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:48.226933   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.228070   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.229057   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.230849   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.231433   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:48.226933   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.228070   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.229057   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.230849   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.231433   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:48.236978  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:48.236992  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:48.266604  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:48.266634  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:48.326691  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:48.326727  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:48.370030  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:48.370062  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:50.950604  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:50.960973  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:50.961044  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:50.989711  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:50.989734  319301 cri.go:96] found id: ""
	I1227 20:13:50.989743  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:50.989813  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:50.993765  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:50.993882  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:51.024930  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:51.024955  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:51.024976  319301 cri.go:96] found id: ""
	I1227 20:13:51.025000  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:51.025060  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:51.029133  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:51.034041  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:51.034136  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:51.061567  319301 cri.go:96] found id: ""
	I1227 20:13:51.061590  319301 logs.go:282] 0 containers: []
	W1227 20:13:51.061599  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:51.061608  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:51.061673  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:51.090737  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:51.090764  319301 cri.go:96] found id: ""
	I1227 20:13:51.090773  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:51.090847  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:51.095345  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:51.095432  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:51.123208  319301 cri.go:96] found id: ""
	I1227 20:13:51.123244  319301 logs.go:282] 0 containers: []
	W1227 20:13:51.123254  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:51.123260  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:51.123334  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:51.154295  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:51.154317  319301 cri.go:96] found id: ""
	I1227 20:13:51.154325  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:51.154407  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:51.158410  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:51.158485  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:51.189846  319301 cri.go:96] found id: ""
	I1227 20:13:51.189882  319301 logs.go:282] 0 containers: []
	W1227 20:13:51.189896  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:51.189909  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:51.189921  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:51.286819  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:51.286858  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:51.305366  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:51.305393  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:51.380305  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:51.380343  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:51.441677  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:51.441710  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:51.481914  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:51.481949  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:51.547090  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:51.539048   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.539678   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.541335   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.541928   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.543466   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:51.539048   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.539678   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.541335   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.541928   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.543466   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:51.547154  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:51.547176  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:51.578696  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:51.578725  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:51.608004  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:51.608032  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:51.636360  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:51.636391  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:54.212415  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:54.222852  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:54.222923  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:54.251561  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:54.251580  319301 cri.go:96] found id: ""
	I1227 20:13:54.251587  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:54.251645  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:54.255279  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:54.255354  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:54.292682  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:54.292706  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:54.292711  319301 cri.go:96] found id: ""
	I1227 20:13:54.292719  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:54.292781  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:54.296595  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:54.300085  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:54.300159  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:54.326489  319301 cri.go:96] found id: ""
	I1227 20:13:54.326555  319301 logs.go:282] 0 containers: []
	W1227 20:13:54.326579  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:54.326605  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:54.326696  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:54.353313  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:54.353338  319301 cri.go:96] found id: ""
	I1227 20:13:54.353347  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:54.353403  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:54.356927  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:54.356999  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:54.381581  319301 cri.go:96] found id: ""
	I1227 20:13:54.381617  319301 logs.go:282] 0 containers: []
	W1227 20:13:54.381626  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:54.381633  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:54.381691  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:54.414363  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:54.414383  319301 cri.go:96] found id: ""
	I1227 20:13:54.414391  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:54.414446  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:54.418045  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:54.418114  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:54.449206  319301 cri.go:96] found id: ""
	I1227 20:13:54.449229  319301 logs.go:282] 0 containers: []
	W1227 20:13:54.449238  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:54.449252  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:54.449264  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:54.517227  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:54.508584   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.509203   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.510795   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.511388   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.512826   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:54.508584   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.509203   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.510795   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.511388   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.512826   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:54.517253  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:54.517266  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:54.544360  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:54.544391  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:54.599513  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:54.599547  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:54.644818  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:54.644847  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:54.688568  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:54.688609  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:54.713724  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:54.713751  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:54.741842  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:54.741868  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:54.820175  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:54.820209  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:54.925045  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:54.925099  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:57.443738  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:57.454148  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:57.454219  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:57.484004  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:57.484071  319301 cri.go:96] found id: ""
	I1227 20:13:57.484087  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:57.484154  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:57.487937  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:57.488009  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:57.513954  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:57.513978  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:57.513983  319301 cri.go:96] found id: ""
	I1227 20:13:57.513991  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:57.514048  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:57.517734  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:57.521248  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:57.521322  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:57.548709  319301 cri.go:96] found id: ""
	I1227 20:13:57.548734  319301 logs.go:282] 0 containers: []
	W1227 20:13:57.548743  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:57.548749  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:57.548807  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:57.574830  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:57.574853  319301 cri.go:96] found id: ""
	I1227 20:13:57.574862  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:57.574919  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:57.578643  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:57.578770  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:57.604928  319301 cri.go:96] found id: ""
	I1227 20:13:57.604952  319301 logs.go:282] 0 containers: []
	W1227 20:13:57.604961  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:57.604967  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:57.605037  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:57.636096  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:57.636118  319301 cri.go:96] found id: ""
	I1227 20:13:57.636126  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:57.636181  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:57.640206  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:57.640289  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:57.664867  319301 cri.go:96] found id: ""
	I1227 20:13:57.664893  319301 logs.go:282] 0 containers: []
	W1227 20:13:57.664903  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:57.664918  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:57.664930  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:57.760571  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:57.760614  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:57.779034  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:57.779063  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:57.860979  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:57.853801   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.854291   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.855825   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.856219   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.857717   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:57.853801   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.854291   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.855825   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.856219   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.857717   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:57.861005  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:57.861030  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:57.891248  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:57.891279  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:57.951146  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:57.951184  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:57.983957  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:57.983983  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:58.027711  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:58.027751  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:58.057942  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:58.057967  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:58.134700  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:58.134737  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:00.665876  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:00.676353  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:00.676426  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:00.704251  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:00.704274  319301 cri.go:96] found id: ""
	I1227 20:14:00.704284  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:00.704369  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:00.708101  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:00.708172  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:00.744575  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:00.744598  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:00.744602  319301 cri.go:96] found id: ""
	I1227 20:14:00.744610  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:00.744681  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:00.748672  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:00.752393  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:00.752495  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:00.778438  319301 cri.go:96] found id: ""
	I1227 20:14:00.778463  319301 logs.go:282] 0 containers: []
	W1227 20:14:00.778472  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:00.778478  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:00.778568  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:00.804119  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:00.804143  319301 cri.go:96] found id: ""
	I1227 20:14:00.804152  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:00.804243  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:00.807914  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:00.808018  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:00.837548  319301 cri.go:96] found id: ""
	I1227 20:14:00.837626  319301 logs.go:282] 0 containers: []
	W1227 20:14:00.837640  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:00.837648  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:00.837723  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:00.864504  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:00.864527  319301 cri.go:96] found id: ""
	I1227 20:14:00.864535  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:00.864590  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:00.868408  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:00.868482  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:00.897150  319301 cri.go:96] found id: ""
	I1227 20:14:00.897173  319301 logs.go:282] 0 containers: []
	W1227 20:14:00.897182  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:00.897197  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:00.897210  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:00.998644  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:00.998688  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:01.021375  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:01.021415  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:01.054456  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:01.054487  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:01.115661  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:01.115700  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:01.161388  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:01.161423  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:01.192518  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:01.192549  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:01.275490  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:01.275523  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:01.341916  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:01.334014   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.334408   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.335994   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.336428   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.337960   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:01.334014   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.334408   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.335994   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.336428   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.337960   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:01.341937  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:01.341950  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:01.368174  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:01.368205  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:03.909559  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:03.920151  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:03.920223  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:03.950304  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:03.950321  319301 cri.go:96] found id: ""
	I1227 20:14:03.950329  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:03.950383  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:03.954284  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:03.954356  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:03.991836  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:03.991917  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:03.991937  319301 cri.go:96] found id: ""
	I1227 20:14:03.991960  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:03.992044  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:03.996532  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:04.000198  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:04.000315  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:04.031549  319301 cri.go:96] found id: ""
	I1227 20:14:04.031622  319301 logs.go:282] 0 containers: []
	W1227 20:14:04.031647  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:04.031671  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:04.031765  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:04.060260  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:04.060328  319301 cri.go:96] found id: ""
	I1227 20:14:04.060356  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:04.060444  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:04.064496  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:04.064588  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:04.102911  319301 cri.go:96] found id: ""
	I1227 20:14:04.103013  319301 logs.go:282] 0 containers: []
	W1227 20:14:04.103124  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:04.103169  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:04.103319  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:04.131147  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:04.131212  319301 cri.go:96] found id: ""
	I1227 20:14:04.131234  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:04.131327  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:04.135698  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:04.135819  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:04.164124  319301 cri.go:96] found id: ""
	I1227 20:14:04.164202  319301 logs.go:282] 0 containers: []
	W1227 20:14:04.164224  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:04.164266  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:04.164297  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:04.182491  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:04.182521  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:04.211036  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:04.211068  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:04.256784  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:04.256821  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:04.348299  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:04.348336  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:04.450573  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:04.450613  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:04.516283  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:04.506999   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.507835   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.510141   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.510856   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.512527   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:04.506999   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.507835   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.510141   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.510856   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.512527   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:04.516305  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:04.516319  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:04.576841  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:04.576872  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:04.614008  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:04.614035  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:04.641690  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:04.641719  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
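The pass above is one iteration of minikube's diagnostic collection while the apiserver stays unreachable: list the control-plane containers with crictl, tail each one's log, then pull the kubelet, CRI-O, and dmesg logs. As a rough manual equivalent (a sketch only, assuming crictl is on PATH inside the node, the same kubeconfig path as this run, and with <container-id> standing in for one of the IDs listed above), the same data can be gathered by hand:

	sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver   # container IDs for one component
	sudo /usr/local/bin/crictl logs --tail 400 <container-id>        # last 400 lines of that container's log
	sudo journalctl -u kubelet -n 400                                # kubelet service log
	sudo journalctl -u crio -n 400                                   # CRI-O service log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig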
	I1227 20:14:07.176073  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:07.186712  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:07.186783  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:07.211686  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:07.211709  319301 cri.go:96] found id: ""
	I1227 20:14:07.211718  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:07.211775  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:07.215681  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:07.215756  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:07.240540  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:07.240563  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:07.240569  319301 cri.go:96] found id: ""
	I1227 20:14:07.240577  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:07.240630  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:07.245279  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:07.249179  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:07.249250  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:07.276774  319301 cri.go:96] found id: ""
	I1227 20:14:07.276800  319301 logs.go:282] 0 containers: []
	W1227 20:14:07.276810  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:07.276816  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:07.276873  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:07.304802  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:07.304821  319301 cri.go:96] found id: ""
	I1227 20:14:07.304829  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:07.304883  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:07.308534  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:07.308604  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:07.336318  319301 cri.go:96] found id: ""
	I1227 20:14:07.336344  319301 logs.go:282] 0 containers: []
	W1227 20:14:07.336354  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:07.336360  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:07.336423  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:07.362751  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:07.362771  319301 cri.go:96] found id: ""
	I1227 20:14:07.362780  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:07.362840  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:07.366846  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:07.366918  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:07.395130  319301 cri.go:96] found id: ""
	I1227 20:14:07.395152  319301 logs.go:282] 0 containers: []
	W1227 20:14:07.395161  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:07.395175  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:07.395187  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:07.491440  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:07.491518  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:07.527740  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:07.527770  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:07.558436  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:07.558464  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:07.588229  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:07.588259  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:07.607165  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:07.607197  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:07.677755  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:07.668928   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.669821   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.671526   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.672177   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.673864   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:07.668928   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.669821   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.671526   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.672177   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.673864   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:07.677777  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:07.677791  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:07.739114  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:07.739152  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:07.784369  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:07.784406  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:07.810544  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:07.810571  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:10.388063  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:10.398699  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:10.398769  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:10.429540  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:10.429607  319301 cri.go:96] found id: ""
	I1227 20:14:10.429631  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:10.429721  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:10.433534  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:10.433651  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:10.459275  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:10.459297  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:10.459303  319301 cri.go:96] found id: ""
	I1227 20:14:10.459310  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:10.459366  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:10.463124  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:10.466705  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:10.466798  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:10.492126  319301 cri.go:96] found id: ""
	I1227 20:14:10.492155  319301 logs.go:282] 0 containers: []
	W1227 20:14:10.492173  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:10.492184  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:10.492242  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:10.518226  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:10.518248  319301 cri.go:96] found id: ""
	I1227 20:14:10.518256  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:10.518364  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:10.522989  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:10.523096  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:10.549695  319301 cri.go:96] found id: ""
	I1227 20:14:10.549722  319301 logs.go:282] 0 containers: []
	W1227 20:14:10.549732  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:10.549738  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:10.549798  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:10.579366  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:10.579390  319301 cri.go:96] found id: ""
	I1227 20:14:10.579398  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:10.579455  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:10.583638  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:10.583714  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:10.615082  319301 cri.go:96] found id: ""
	I1227 20:14:10.615105  319301 logs.go:282] 0 containers: []
	W1227 20:14:10.615113  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:10.615130  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:10.615142  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:10.683394  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:10.674472   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.675801   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.676387   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.678136   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.678634   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:10.674472   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.675801   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.676387   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.678136   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.678634   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:10.683412  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:10.683425  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:10.727898  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:10.727931  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:10.753009  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:10.753042  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:10.782677  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:10.782703  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:10.866110  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:10.866147  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:10.959413  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:10.959452  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:10.977909  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:10.977941  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:11.005943  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:11.005969  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:11.074309  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:11.074346  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:13.614417  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:13.625578  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:13.625646  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:13.652507  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:13.652525  319301 cri.go:96] found id: ""
	I1227 20:14:13.652534  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:13.652588  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:13.656545  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:13.656609  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:13.683073  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:13.683097  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:13.683102  319301 cri.go:96] found id: ""
	I1227 20:14:13.683110  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:13.683166  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:13.686968  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:13.690405  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:13.690466  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:13.717840  319301 cri.go:96] found id: ""
	I1227 20:14:13.717864  319301 logs.go:282] 0 containers: []
	W1227 20:14:13.717873  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:13.717879  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:13.717938  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:13.746028  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:13.746049  319301 cri.go:96] found id: ""
	I1227 20:14:13.746058  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:13.746117  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:13.749660  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:13.749741  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:13.775234  319301 cri.go:96] found id: ""
	I1227 20:14:13.775301  319301 logs.go:282] 0 containers: []
	W1227 20:14:13.775322  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:13.775330  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:13.775388  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:13.800618  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:13.800642  319301 cri.go:96] found id: ""
	I1227 20:14:13.800650  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:13.800708  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:13.804545  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:13.804619  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:13.832761  319301 cri.go:96] found id: ""
	I1227 20:14:13.832786  319301 logs.go:282] 0 containers: []
	W1227 20:14:13.832795  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:13.832811  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:13.832824  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:13.851133  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:13.851163  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:13.926603  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:13.926681  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:13.961517  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:13.961544  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:14.069694  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:14.069739  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:14.151483  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:14.142577   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.143391   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.145037   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.145551   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.147508   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:14.142577   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.143391   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.145037   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.145551   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.147508   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:14.151505  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:14.151520  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:14.181727  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:14.181758  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:14.240301  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:14.240339  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:14.300709  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:14.300743  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:14.336466  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:14.336498  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
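Between passes the probe `sudo pgrep -xnf kube-apiserver.*minikube.*` repeats roughly every three seconds, judging by the timestamps, and only succeeds once an apiserver process whose command line mentions minikube is running. A minimal sketch of the same wait, assuming the identical pattern and a three-second interval:

	# wait until a matching kube-apiserver process exists (sketch of the poll seen in this log)
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 3
	done
	echo "kube-apiserver process found"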
	I1227 20:14:16.865634  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:16.876358  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:16.876432  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:16.904188  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:16.904253  319301 cri.go:96] found id: ""
	I1227 20:14:16.904276  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:16.904367  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:16.908220  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:16.908322  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:16.937896  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:16.937919  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:16.937924  319301 cri.go:96] found id: ""
	I1227 20:14:16.937932  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:16.937986  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:16.942670  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:16.946301  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:16.946387  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:16.985586  319301 cri.go:96] found id: ""
	I1227 20:14:16.985609  319301 logs.go:282] 0 containers: []
	W1227 20:14:16.985618  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:16.985624  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:16.985683  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:17.013996  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:17.014029  319301 cri.go:96] found id: ""
	I1227 20:14:17.014039  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:17.014137  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:17.018935  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:17.019008  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:17.052484  319301 cri.go:96] found id: ""
	I1227 20:14:17.052561  319301 logs.go:282] 0 containers: []
	W1227 20:14:17.052583  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:17.052604  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:17.052695  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:17.081622  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:17.081695  319301 cri.go:96] found id: ""
	I1227 20:14:17.081718  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:17.081788  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:17.085690  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:17.085794  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:17.112049  319301 cri.go:96] found id: ""
	I1227 20:14:17.112074  319301 logs.go:282] 0 containers: []
	W1227 20:14:17.112082  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:17.112098  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:17.112141  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:17.137714  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:17.137743  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:17.213490  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:17.213533  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:17.246326  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:17.246356  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:17.328320  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:17.320845   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.321569   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.322897   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.323352   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.324795   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:17.320845   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.321569   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.322897   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.323352   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.324795   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:17.328340  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:17.328353  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:17.385541  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:17.385578  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:17.427419  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:17.427449  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:17.452174  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:17.452206  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:17.546685  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:17.546724  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:17.565295  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:17.565332  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:20.098978  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:20.111051  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:20.111126  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:20.137851  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:20.137927  319301 cri.go:96] found id: ""
	I1227 20:14:20.137963  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:20.138055  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:20.142900  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:20.143001  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:20.170010  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:20.170087  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:20.170109  319301 cri.go:96] found id: ""
	I1227 20:14:20.170137  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:20.170221  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:20.175063  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:20.178747  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:20.178824  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:20.206381  319301 cri.go:96] found id: ""
	I1227 20:14:20.206409  319301 logs.go:282] 0 containers: []
	W1227 20:14:20.206418  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:20.206425  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:20.206485  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:20.233473  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:20.233499  319301 cri.go:96] found id: ""
	I1227 20:14:20.233508  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:20.233571  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:20.237997  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:20.238070  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:20.262995  319301 cri.go:96] found id: ""
	I1227 20:14:20.263067  319301 logs.go:282] 0 containers: []
	W1227 20:14:20.263092  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:20.263099  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:20.263170  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:20.288462  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:20.288537  319301 cri.go:96] found id: ""
	I1227 20:14:20.288566  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:20.288647  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:20.292436  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:20.292550  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:20.322573  319301 cri.go:96] found id: ""
	I1227 20:14:20.322596  319301 logs.go:282] 0 containers: []
	W1227 20:14:20.322605  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:20.322621  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:20.322633  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:20.432211  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:20.432245  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:20.496754  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:20.496791  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:20.540278  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:20.540351  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:20.567122  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:20.567152  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:20.648855  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:20.648895  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:20.667153  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:20.667185  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:20.736076  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:20.727815   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.728362   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.730119   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.730829   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.732497   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:20.727815   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.728362   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.730119   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.730829   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.732497   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:20.736098  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:20.736112  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:20.762277  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:20.762304  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:20.800871  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:20.800901  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:23.331772  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:23.342153  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:23.342227  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:23.367402  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:23.367424  319301 cri.go:96] found id: ""
	I1227 20:14:23.367433  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:23.367489  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:23.371067  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:23.371137  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:23.397005  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:23.397081  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:23.397101  319301 cri.go:96] found id: ""
	I1227 20:14:23.397127  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:23.397212  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:23.401002  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:23.404386  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:23.404490  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:23.430285  319301 cri.go:96] found id: ""
	I1227 20:14:23.430309  319301 logs.go:282] 0 containers: []
	W1227 20:14:23.430318  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:23.430326  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:23.430383  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:23.461494  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:23.461517  319301 cri.go:96] found id: ""
	I1227 20:14:23.461526  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:23.461578  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:23.465337  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:23.465409  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:23.496783  319301 cri.go:96] found id: ""
	I1227 20:14:23.496808  319301 logs.go:282] 0 containers: []
	W1227 20:14:23.496818  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:23.496824  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:23.496881  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:23.522580  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:23.522602  319301 cri.go:96] found id: ""
	I1227 20:14:23.522610  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:23.522665  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:23.526436  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:23.526519  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:23.557267  319301 cri.go:96] found id: ""
	I1227 20:14:23.557299  319301 logs.go:282] 0 containers: []
	W1227 20:14:23.557309  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:23.557325  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:23.557336  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:23.584981  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:23.585010  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:23.648213  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:23.648252  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:23.695771  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:23.695847  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:23.726135  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:23.726165  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:23.810400  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:23.810440  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:23.916410  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:23.916451  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:23.945753  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:23.945825  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:23.996874  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:23.996903  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:24.015806  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:24.015853  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:24.093634  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:24.083702   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.084655   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.086499   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.086863   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.088426   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:24.083702   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.084655   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.086499   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.086863   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.088426   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:26.595192  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:26.607312  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:26.607388  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:26.644526  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:26.644546  319301 cri.go:96] found id: ""
	I1227 20:14:26.644554  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:26.644613  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:26.648515  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:26.648588  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:26.674360  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:26.674383  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:26.674387  319301 cri.go:96] found id: ""
	I1227 20:14:26.674395  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:26.674451  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:26.678114  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:26.681548  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:26.681619  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:26.707823  319301 cri.go:96] found id: ""
	I1227 20:14:26.707847  319301 logs.go:282] 0 containers: []
	W1227 20:14:26.707856  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:26.707863  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:26.707918  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:26.736808  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:26.736830  319301 cri.go:96] found id: ""
	I1227 20:14:26.736839  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:26.736910  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:26.740449  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:26.740516  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:26.767979  319301 cri.go:96] found id: ""
	I1227 20:14:26.768005  319301 logs.go:282] 0 containers: []
	W1227 20:14:26.768014  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:26.768020  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:26.768093  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:26.794399  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:26.794419  319301 cri.go:96] found id: ""
	I1227 20:14:26.794428  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:26.794482  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:26.798158  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:26.798242  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:26.822859  319301 cri.go:96] found id: ""
	I1227 20:14:26.822883  319301 logs.go:282] 0 containers: []
	W1227 20:14:26.822893  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:26.822924  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:26.822946  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:26.868214  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:26.868238  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:26.932994  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:26.933029  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:26.977303  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:26.977340  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:27.068000  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:27.068040  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:27.171536  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:27.171574  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:27.190535  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:27.190562  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:27.216736  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:27.216762  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:27.243411  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:27.243439  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:27.295099  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:27.295126  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:27.357878  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:27.350559   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.350955   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.352482   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.352824   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.354320   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:27.350559   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.350955   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.352482   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.352824   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.354320   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:29.858681  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:29.868776  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:29.868844  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:29.896575  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:29.896597  319301 cri.go:96] found id: ""
	I1227 20:14:29.896605  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:29.896686  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:29.900141  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:29.900230  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:29.933885  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:29.933909  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:29.933915  319301 cri.go:96] found id: ""
	I1227 20:14:29.933922  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:29.933995  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:29.937419  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:29.940597  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:29.940661  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:29.985795  319301 cri.go:96] found id: ""
	I1227 20:14:29.985826  319301 logs.go:282] 0 containers: []
	W1227 20:14:29.985836  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:29.985843  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:29.985919  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:30.025679  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:30.025700  319301 cri.go:96] found id: ""
	I1227 20:14:30.025709  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:30.025777  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:30.049697  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:30.049787  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:30.082890  319301 cri.go:96] found id: ""
	I1227 20:14:30.082916  319301 logs.go:282] 0 containers: []
	W1227 20:14:30.082926  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:30.082934  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:30.083006  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:30.119124  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:30.119148  319301 cri.go:96] found id: ""
	I1227 20:14:30.119156  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:30.119217  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:30.123169  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:30.123244  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:30.151766  319301 cri.go:96] found id: ""
	I1227 20:14:30.151790  319301 logs.go:282] 0 containers: []
	W1227 20:14:30.151799  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:30.151816  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:30.151828  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:30.169326  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:30.169357  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:30.199380  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:30.199412  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:30.265121  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:30.265163  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:30.356459  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:30.356498  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:30.392984  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:30.393013  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:30.499474  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:30.499511  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:30.571342  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:30.561186   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.563435   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.564195   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.566014   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.566655   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:30.561186   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.563435   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.564195   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.566014   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.566655   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:30.571365  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:30.571378  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:30.615172  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:30.615207  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:30.644774  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:30.644803  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:33.172504  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:33.183855  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:33.183927  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:33.214210  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:33.214232  319301 cri.go:96] found id: ""
	I1227 20:14:33.214241  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:33.214307  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:33.218161  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:33.218245  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:33.244477  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:33.244501  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:33.244506  319301 cri.go:96] found id: ""
	I1227 20:14:33.244513  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:33.244574  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:33.248725  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:33.252096  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:33.252166  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:33.284273  319301 cri.go:96] found id: ""
	I1227 20:14:33.284304  319301 logs.go:282] 0 containers: []
	W1227 20:14:33.284317  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:33.284327  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:33.284406  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:33.311094  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:33.311117  319301 cri.go:96] found id: ""
	I1227 20:14:33.311125  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:33.311184  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:33.315375  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:33.315450  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:33.344846  319301 cri.go:96] found id: ""
	I1227 20:14:33.344870  319301 logs.go:282] 0 containers: []
	W1227 20:14:33.344879  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:33.344886  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:33.344945  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:33.370949  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:33.371011  319301 cri.go:96] found id: ""
	I1227 20:14:33.371033  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:33.371093  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:33.375136  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:33.375211  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:33.403339  319301 cri.go:96] found id: ""
	I1227 20:14:33.403361  319301 logs.go:282] 0 containers: []
	W1227 20:14:33.403370  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:33.403385  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:33.403396  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:33.484170  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:33.484207  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:33.516735  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:33.516766  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:33.534421  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:33.534452  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:33.613759  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:33.613800  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:33.651422  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:33.651450  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:33.759905  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:33.759949  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:33.827184  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:33.819142   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.819867   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.821423   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.822059   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.823552   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:33.819142   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.819867   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.821423   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.822059   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.823552   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:33.827217  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:33.827232  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:33.858891  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:33.858926  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:33.904092  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:33.904128  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:36.431294  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:36.449106  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:36.449178  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:36.480392  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:36.480416  319301 cri.go:96] found id: ""
	I1227 20:14:36.480425  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:36.480481  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:36.485341  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:36.485424  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:36.515111  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:36.515185  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:36.515199  319301 cri.go:96] found id: ""
	I1227 20:14:36.515225  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:36.515283  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:36.519737  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:36.523801  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:36.523877  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:36.550603  319301 cri.go:96] found id: ""
	I1227 20:14:36.550628  319301 logs.go:282] 0 containers: []
	W1227 20:14:36.550637  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:36.550644  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:36.550699  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:36.586466  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:36.586492  319301 cri.go:96] found id: ""
	I1227 20:14:36.586500  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:36.586577  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:36.590067  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:36.590139  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:36.621202  319301 cri.go:96] found id: ""
	I1227 20:14:36.621235  319301 logs.go:282] 0 containers: []
	W1227 20:14:36.621244  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:36.621250  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:36.621308  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:36.647269  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:36.647292  319301 cri.go:96] found id: ""
	I1227 20:14:36.647301  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:36.647379  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:36.651085  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:36.651160  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:36.677749  319301 cri.go:96] found id: ""
	I1227 20:14:36.677778  319301 logs.go:282] 0 containers: []
	W1227 20:14:36.677788  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:36.677804  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:36.677817  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:36.725080  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:36.725110  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:36.755181  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:36.755211  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:36.784468  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:36.784496  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:36.816908  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:36.816940  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:36.834015  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:36.834047  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:36.900869  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:36.892648   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.893851   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.894994   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.895421   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.896907   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:36.892648   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.893851   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.894994   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.895421   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.896907   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:36.900892  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:36.900908  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:36.960391  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:36.960427  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:37.045275  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:37.045325  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:37.148150  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:37.148188  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:39.676095  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:39.686901  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:39.686981  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:39.713632  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:39.713662  319301 cri.go:96] found id: ""
	I1227 20:14:39.713681  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:39.713758  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:39.717685  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:39.717762  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:39.744240  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:39.744264  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:39.744269  319301 cri.go:96] found id: ""
	I1227 20:14:39.744277  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:39.744330  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:39.748168  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:39.751671  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:39.751770  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:39.781268  319301 cri.go:96] found id: ""
	I1227 20:14:39.781293  319301 logs.go:282] 0 containers: []
	W1227 20:14:39.781302  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:39.781309  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:39.781401  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:39.810785  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:39.810807  319301 cri.go:96] found id: ""
	I1227 20:14:39.810815  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:39.810888  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:39.814715  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:39.814784  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:39.841437  319301 cri.go:96] found id: ""
	I1227 20:14:39.841493  319301 logs.go:282] 0 containers: []
	W1227 20:14:39.841503  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:39.841508  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:39.841573  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:39.868907  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:39.868925  319301 cri.go:96] found id: ""
	I1227 20:14:39.868933  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:39.868987  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:39.872674  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:39.872744  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:39.900867  319301 cri.go:96] found id: ""
	I1227 20:14:39.900943  319301 logs.go:282] 0 containers: []
	W1227 20:14:39.900966  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:39.901013  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:39.901043  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:39.918593  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:39.918625  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:39.949056  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:39.949087  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:39.981788  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:39.981818  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:40.105238  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:40.105377  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:40.191666  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:40.183905   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.184449   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.186006   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.186447   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.187950   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:40.183905   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.184449   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.186006   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.186447   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.187950   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:40.191684  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:40.191701  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:40.262140  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:40.262180  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:40.310808  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:40.310845  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:40.337783  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:40.337811  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:40.368704  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:40.368733  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:42.951291  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:42.961621  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:42.961714  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:42.996358  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:42.996382  319301 cri.go:96] found id: ""
	I1227 20:14:42.996391  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:42.996476  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:43.000167  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:43.000258  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:43.042517  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:43.042542  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:43.042547  319301 cri.go:96] found id: ""
	I1227 20:14:43.042555  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:43.042636  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:43.046498  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:43.049992  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:43.050069  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:43.076653  319301 cri.go:96] found id: ""
	I1227 20:14:43.076681  319301 logs.go:282] 0 containers: []
	W1227 20:14:43.076690  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:43.076697  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:43.076755  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:43.104355  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:43.104379  319301 cri.go:96] found id: ""
	I1227 20:14:43.104388  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:43.104444  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:43.108064  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:43.108137  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:43.136746  319301 cri.go:96] found id: ""
	I1227 20:14:43.136771  319301 logs.go:282] 0 containers: []
	W1227 20:14:43.136780  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:43.136786  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:43.136856  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:43.167333  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:43.167354  319301 cri.go:96] found id: ""
	I1227 20:14:43.167362  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:43.167417  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:43.171054  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:43.171167  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:43.196510  319301 cri.go:96] found id: ""
	I1227 20:14:43.196539  319301 logs.go:282] 0 containers: []
	W1227 20:14:43.196548  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:43.196562  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:43.196573  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:43.246188  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:43.246222  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:43.280060  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:43.280088  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:43.364679  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:43.364718  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:43.383405  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:43.383434  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:43.412457  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:43.412484  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:43.441225  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:43.441251  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:43.483277  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:43.483305  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:43.587381  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:43.587418  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:43.657966  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:43.648616   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.649341   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.651243   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.652029   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.653574   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:43.648616   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.649341   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.651243   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.652029   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.653574   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:43.657996  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:43.658011  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:46.217780  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:46.229546  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:46.229622  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:46.255054  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:46.255074  319301 cri.go:96] found id: ""
	I1227 20:14:46.255082  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:46.255135  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:46.258848  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:46.258946  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:46.292684  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:46.292758  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:46.292778  319301 cri.go:96] found id: ""
	I1227 20:14:46.292803  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:46.292889  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:46.296621  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:46.300035  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:46.300104  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:46.325669  319301 cri.go:96] found id: ""
	I1227 20:14:46.325694  319301 logs.go:282] 0 containers: []
	W1227 20:14:46.325703  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:46.325709  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:46.325766  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:46.352094  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:46.352159  319301 cri.go:96] found id: ""
	I1227 20:14:46.352182  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:46.352268  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:46.355963  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:46.356077  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:46.381620  319301 cri.go:96] found id: ""
	I1227 20:14:46.381646  319301 logs.go:282] 0 containers: []
	W1227 20:14:46.381656  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:46.381662  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:46.381738  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:46.410104  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:46.410127  319301 cri.go:96] found id: ""
	I1227 20:14:46.410135  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:46.410191  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:46.413648  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:46.413715  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:46.440709  319301 cri.go:96] found id: ""
	I1227 20:14:46.440734  319301 logs.go:282] 0 containers: []
	W1227 20:14:46.440745  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:46.440759  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:46.440781  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:46.469916  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:46.469945  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:46.571819  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:46.571854  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:46.590503  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:46.590531  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:46.624094  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:46.624120  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:46.655415  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:46.655444  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:46.727967  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:46.719794   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.720498   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.722193   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.722714   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.724244   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:46.719794   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.720498   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.722193   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.722714   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.724244   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:46.727989  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:46.728003  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:46.787862  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:46.787899  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:46.848761  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:46.848797  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:46.883658  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:46.883687  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:49.466063  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:49.476365  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:49.476460  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:49.502643  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:49.502665  319301 cri.go:96] found id: ""
	I1227 20:14:49.502673  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:49.502727  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:49.506369  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:49.506443  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:49.532399  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:49.532421  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:49.532427  319301 cri.go:96] found id: ""
	I1227 20:14:49.532435  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:49.532488  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:49.536133  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:49.539580  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:49.539645  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:49.566501  319301 cri.go:96] found id: ""
	I1227 20:14:49.566528  319301 logs.go:282] 0 containers: []
	W1227 20:14:49.566537  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:49.566544  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:49.566605  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:49.602221  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:49.602245  319301 cri.go:96] found id: ""
	I1227 20:14:49.602254  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:49.602316  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:49.606305  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:49.606375  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:49.632906  319301 cri.go:96] found id: ""
	I1227 20:14:49.632931  319301 logs.go:282] 0 containers: []
	W1227 20:14:49.632941  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:49.632946  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:49.633012  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:49.660593  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:49.660616  319301 cri.go:96] found id: ""
	I1227 20:14:49.660625  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:49.660683  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:49.664343  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:49.664414  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:49.691030  319301 cri.go:96] found id: ""
	I1227 20:14:49.691093  319301 logs.go:282] 0 containers: []
	W1227 20:14:49.691110  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:49.691125  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:49.691137  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:49.786516  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:49.786552  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:49.837581  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:49.837615  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:49.923089  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:49.923126  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:49.964776  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:49.964806  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:49.984138  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:49.984166  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:50.053988  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:50.045799   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.046531   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.048064   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.048564   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.050125   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:50.045799   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.046531   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.048064   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.048564   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.050125   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:50.054052  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:50.054072  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:50.080753  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:50.080847  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:50.160335  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:50.160373  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:50.189801  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:50.189831  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:52.722382  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:52.732860  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:52.732954  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:52.759105  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:52.759129  319301 cri.go:96] found id: ""
	I1227 20:14:52.759140  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:52.759192  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:52.763086  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:52.763152  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:52.789342  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:52.789365  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:52.789370  319301 cri.go:96] found id: ""
	I1227 20:14:52.789378  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:52.789441  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:52.793045  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:52.796599  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:52.796677  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:52.821951  319301 cri.go:96] found id: ""
	I1227 20:14:52.821975  319301 logs.go:282] 0 containers: []
	W1227 20:14:52.821984  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:52.821990  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:52.822048  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:52.848207  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:52.848227  319301 cri.go:96] found id: ""
	I1227 20:14:52.848235  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:52.848290  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:52.852016  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:52.852114  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:52.878718  319301 cri.go:96] found id: ""
	I1227 20:14:52.878752  319301 logs.go:282] 0 containers: []
	W1227 20:14:52.878761  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:52.878768  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:52.878826  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:52.905928  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:52.906001  319301 cri.go:96] found id: ""
	I1227 20:14:52.906023  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:52.906113  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:52.910178  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:52.910250  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:52.937172  319301 cri.go:96] found id: ""
	I1227 20:14:52.937209  319301 logs.go:282] 0 containers: []
	W1227 20:14:52.937218  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:52.937231  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:52.937249  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:52.966131  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:52.966162  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:53.003464  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:53.003490  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:53.021719  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:53.021777  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:53.091033  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:53.081906   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.083382   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.084066   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.085728   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.086021   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:53.081906   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.083382   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.084066   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.085728   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.086021   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:53.091054  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:53.091067  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:53.153878  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:53.153918  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:53.184615  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:53.184643  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:53.268968  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:53.269005  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:53.374253  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:53.374287  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:53.403008  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:53.403044  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:55.952353  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:55.962631  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:55.962719  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:55.995078  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:55.995100  319301 cri.go:96] found id: ""
	I1227 20:14:55.995108  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:55.995174  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:55.999787  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:55.999857  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:56.034785  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:56.034809  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:56.034814  319301 cri.go:96] found id: ""
	I1227 20:14:56.034821  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:56.034886  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:56.039026  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:56.043109  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:56.043239  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:56.076322  319301 cri.go:96] found id: ""
	I1227 20:14:56.076349  319301 logs.go:282] 0 containers: []
	W1227 20:14:56.076358  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:56.076365  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:56.076450  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:56.105910  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:56.105937  319301 cri.go:96] found id: ""
	I1227 20:14:56.105945  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:56.106024  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:56.109833  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:56.109951  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:56.136658  319301 cri.go:96] found id: ""
	I1227 20:14:56.136681  319301 logs.go:282] 0 containers: []
	W1227 20:14:56.136690  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:56.136696  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:56.136751  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:56.162379  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:56.162402  319301 cri.go:96] found id: ""
	I1227 20:14:56.162409  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:56.162464  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:56.165959  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:56.166030  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:56.193023  319301 cri.go:96] found id: ""
	I1227 20:14:56.193057  319301 logs.go:282] 0 containers: []
	W1227 20:14:56.193066  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:56.193097  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:56.193131  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:56.219549  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:56.219577  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:56.255190  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:56.255218  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:56.326655  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:56.326690  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:56.369967  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:56.370002  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:56.449778  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:56.449815  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:56.481804  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:56.481833  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:56.580473  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:56.580507  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:56.597748  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:56.597781  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:56.675164  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:56.667282   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.668004   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.669569   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.670031   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.671487   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:56.667282   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.668004   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.669569   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.670031   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.671487   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:56.675187  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:56.675210  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:59.204907  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:59.215384  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:59.215464  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:59.241010  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:59.241041  319301 cri.go:96] found id: ""
	I1227 20:14:59.241056  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:59.241157  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:59.245340  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:59.245433  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:59.282857  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:59.282880  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:59.282886  319301 cri.go:96] found id: ""
	I1227 20:14:59.282893  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:59.282945  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:59.286535  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:59.289810  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:59.289879  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:59.317473  319301 cri.go:96] found id: ""
	I1227 20:14:59.317509  319301 logs.go:282] 0 containers: []
	W1227 20:14:59.317517  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:59.317524  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:59.317593  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:59.350932  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:59.350952  319301 cri.go:96] found id: ""
	I1227 20:14:59.350961  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:59.351015  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:59.354698  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:59.354768  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:59.381626  319301 cri.go:96] found id: ""
	I1227 20:14:59.381660  319301 logs.go:282] 0 containers: []
	W1227 20:14:59.381669  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:59.381675  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:59.381730  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:59.408107  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:59.408130  319301 cri.go:96] found id: ""
	I1227 20:14:59.408140  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:59.408216  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:59.411771  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:59.411846  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:59.436633  319301 cri.go:96] found id: ""
	I1227 20:14:59.436660  319301 logs.go:282] 0 containers: []
	W1227 20:14:59.436669  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:59.436683  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:59.436695  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:59.532932  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:59.532968  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:59.601543  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:59.593318   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.594069   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.595883   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.596441   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.597498   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:59.593318   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.594069   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.595883   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.596441   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.597498   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:59.601573  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:59.601587  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:59.630627  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:59.630653  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:59.691462  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:59.691537  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:59.736271  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:59.736311  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:59.763317  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:59.763349  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:59.845478  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:59.845512  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:59.877233  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:59.877259  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:59.894077  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:59.894108  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:02.425928  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:15:02.437025  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:15:02.437097  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:15:02.462847  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:15:02.462876  319301 cri.go:96] found id: ""
	I1227 20:15:02.462885  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:15:02.462941  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:02.466840  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:15:02.466915  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:15:02.493867  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:15:02.493889  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:15:02.493895  319301 cri.go:96] found id: ""
	I1227 20:15:02.493903  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:15:02.493986  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:02.497849  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:02.501391  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:15:02.501500  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:15:02.531735  319301 cri.go:96] found id: ""
	I1227 20:15:02.531761  319301 logs.go:282] 0 containers: []
	W1227 20:15:02.531771  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:15:02.531779  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:15:02.531858  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:15:02.557699  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:15:02.557723  319301 cri.go:96] found id: ""
	I1227 20:15:02.557732  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:15:02.557792  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:02.561785  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:15:02.561860  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:15:02.588584  319301 cri.go:96] found id: ""
	I1227 20:15:02.588611  319301 logs.go:282] 0 containers: []
	W1227 20:15:02.588620  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:15:02.588665  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:15:02.588727  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:15:02.626246  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:02.626270  319301 cri.go:96] found id: ""
	I1227 20:15:02.626279  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:15:02.626332  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:02.630342  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:15:02.630416  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:15:02.658875  319301 cri.go:96] found id: ""
	I1227 20:15:02.658899  319301 logs.go:282] 0 containers: []
	W1227 20:15:02.658908  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:15:02.658940  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:15:02.658959  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:15:02.760567  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:15:02.760609  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:15:02.779705  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:15:02.779737  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:15:02.864780  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:15:02.844552   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.845307   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.847070   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.847814   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.850808   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:15:02.844552   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.845307   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.847070   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.847814   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.850808   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:15:02.864807  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:15:02.864822  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:15:02.930564  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:15:02.930600  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:15:02.956647  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:15:02.956674  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:02.988569  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:15:02.988644  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:15:03.080368  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:15:03.080404  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:15:03.109214  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:15:03.109254  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:15:03.154097  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:15:03.154130  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:15:05.702871  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:15:05.713737  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:15:05.713808  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:15:05.747061  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:15:05.747087  319301 cri.go:96] found id: ""
	I1227 20:15:05.747097  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:15:05.747152  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:05.751069  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:15:05.751142  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:15:05.778241  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:15:05.778264  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:15:05.778269  319301 cri.go:96] found id: ""
	I1227 20:15:05.778276  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:15:05.778330  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:05.781970  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:05.785615  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:15:05.785684  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:15:05.811372  319301 cri.go:96] found id: ""
	I1227 20:15:05.811405  319301 logs.go:282] 0 containers: []
	W1227 20:15:05.811419  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:15:05.811426  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:15:05.811487  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:15:05.837308  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:15:05.837331  319301 cri.go:96] found id: ""
	I1227 20:15:05.837339  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:15:05.837394  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:05.841435  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:15:05.841563  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:15:05.872145  319301 cri.go:96] found id: ""
	I1227 20:15:05.872175  319301 logs.go:282] 0 containers: []
	W1227 20:15:05.872184  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:15:05.872191  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:15:05.872248  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:15:05.905843  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:05.905863  319301 cri.go:96] found id: ""
	I1227 20:15:05.905872  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:15:05.905928  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:05.909362  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:15:05.909433  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:15:05.937743  319301 cri.go:96] found id: ""
	I1227 20:15:05.937768  319301 logs.go:282] 0 containers: []
	W1227 20:15:05.937776  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:15:05.937789  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:15:05.937805  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:15:05.956337  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:15:05.956373  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:15:06.027819  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:15:06.027857  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:15:06.055387  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:15:06.055417  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:15:06.087848  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:15:06.087876  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:15:06.191189  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:15:06.191225  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:15:06.260486  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:15:06.252420   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.253150   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.254651   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.255097   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.256545   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:15:06.252420   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.253150   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.254651   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.255097   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.256545   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:15:06.260512  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:15:06.260527  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:15:06.289045  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:15:06.289074  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:15:06.340456  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:15:06.340493  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:06.367177  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:15:06.367209  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:15:08.948368  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:15:08.960093  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:15:08.960163  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:15:09.004464  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:15:09.004531  319301 cri.go:96] found id: ""
	I1227 20:15:09.004541  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:15:09.004627  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:09.008790  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:15:09.008905  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:15:09.041635  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:15:09.041705  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:15:09.041727  319301 cri.go:96] found id: ""
	I1227 20:15:09.041750  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:15:09.041834  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:09.046563  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:09.050558  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:15:09.050679  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:15:09.079147  319301 cri.go:96] found id: ""
	I1227 20:15:09.079218  319301 logs.go:282] 0 containers: []
	W1227 20:15:09.079241  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:15:09.079265  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:15:09.079350  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:15:09.115659  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:15:09.115728  319301 cri.go:96] found id: ""
	I1227 20:15:09.115749  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:15:09.115833  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:09.119927  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:15:09.120060  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:15:09.148832  319301 cri.go:96] found id: ""
	I1227 20:15:09.148905  319301 logs.go:282] 0 containers: []
	W1227 20:15:09.148927  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:15:09.148951  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:15:09.149036  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:15:09.193967  319301 cri.go:96] found id: "d4599a49838601138827173ae16d1700bf9c506a4f9611f8f2415da1ea387070"
	I1227 20:15:09.194039  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:09.194058  319301 cri.go:96] found id: ""
	I1227 20:15:09.194083  319301 logs.go:282] 2 containers: [d4599a49838601138827173ae16d1700bf9c506a4f9611f8f2415da1ea387070 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:15:09.194168  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:09.198186  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:09.202291  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:15:09.202369  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:15:09.233220  319301 cri.go:96] found id: ""
	I1227 20:15:09.233256  319301 logs.go:282] 0 containers: []
	W1227 20:15:09.233266  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:15:09.233275  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:15:09.233286  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:15:09.265208  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:15:09.265236  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:15:09.366491  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:15:09.366527  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:15:09.385049  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:15:09.385152  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:15:09.416669  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:15:09.416697  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:15:09.477821  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:15:09.477862  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:15:09.503656  319301 logs.go:123] Gathering logs for kube-controller-manager [d4599a49838601138827173ae16d1700bf9c506a4f9611f8f2415da1ea387070] ...
	I1227 20:15:09.503682  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4599a49838601138827173ae16d1700bf9c506a4f9611f8f2415da1ea387070"
	I1227 20:15:09.529517  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:15:09.529549  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:15:09.594024  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:15:09.583997   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.584731   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.586847   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.587585   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.589403   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:15:09.583997   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.584731   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.586847   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.587585   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.589403   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:15:09.594044  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:15:09.594113  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:15:09.641021  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:15:09.641054  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:09.671469  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:15:09.671497  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:15:12.247384  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:15:12.261411  319301 out.go:203] 
	W1227 20:15:12.264240  319301 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1227 20:15:12.264279  319301 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1227 20:15:12.264291  319301 out.go:285] * Related issues:
	* Related issues:
	W1227 20:15:12.264307  319301 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1227 20:15:12.264322  319301 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1227 20:15:12.272645  319301 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-arm64 -p ha-422549 node list --alsologtostderr -v 5" : exit status 105
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 node list --alsologtostderr -v 5
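The failure above is K8S_APISERVER_MISSING: after the cluster restart, minikube polled for the apiserver for 6m0s (the repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" and "crictl ps -a --quiet --name=kube-apiserver" probes in the log) and the process never appeared, even though an apiserver container ID (a0c4c451f03e...) was found. A minimal manual re-check of the same probes, assuming the ha-422549 profile and its container still exist on this host (illustrative commands only, not part of the test run):

# Is a kube-apiserver process running inside the node?
out/minikube-linux-arm64 -p ha-422549 ssh -- "sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo no-apiserver-process"

# What state is the apiserver container in according to CRI-O?
out/minikube-linux-arm64 -p ha-422549 ssh -- "sudo crictl ps -a --name kube-apiserver"

# If an ID is listed but not Running, pull its recent logs (replace <id> with that ID):
out/minikube-linux-arm64 -p ha-422549 ssh -- "sudo crictl logs --tail 100 <id>"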
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-422549
helpers_test.go:244: (dbg) docker inspect ha-422549:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf",
	        "Created": "2025-12-27T20:03:01.682141141Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 319429,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:07:23.280905445Z",
	            "FinishedAt": "2025-12-27T20:07:22.683216546Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/hostname",
	        "HostsPath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/hosts",
	        "LogPath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf-json.log",
	        "Name": "/ha-422549",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-422549:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-422549",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf",
	                "LowerDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064/merged",
	                "UpperDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064/diff",
	                "WorkDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-422549",
	                "Source": "/var/lib/docker/volumes/ha-422549/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422549",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422549",
	                "name.minikube.sigs.k8s.io": "ha-422549",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "28e77342f2c4751026f399b040de05177304716ac6aab83b39b3d9c47cebffe7",
	            "SandboxKey": "/var/run/docker/netns/28e77342f2c4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33177"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422549": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:36:09:aa:37:bf",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9521cb9225c5842f69a8435c5cf5485b75f9a8b2c68158742ff27c2be32f5951",
	                    "EndpointID": "a460c21f8bbd3e3cd9f593131304327baa8422b2d75f0ce1ac3c5c098867a970",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422549",
	                        "53fd780c3df5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
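Per the docker inspect output above, the ha-422549 control-plane container is running and publishes 8443/tcp on 127.0.0.1:33176, so the apiserver failure is inside the node rather than a missing port mapping. A quick hedged check of that mapping, assuming the container is still up (not part of the test run):

# Confirm the published apiserver port (expect 127.0.0.1:33176 per the inspect output)
docker port ha-422549 8443/tcp
docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-422549

# Given the K8S_APISERVER_MISSING error, a probe of the mapped port is expected to be refused:
curl -k --max-time 5 https://127.0.0.1:33176/livez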
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-422549 -n ha-422549
helpers_test.go:253: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p ha-422549 logs -n 25: (2.105308703s)
helpers_test.go:261: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-422549 cp ha-422549-m03:/home/docker/cp-test.txt ha-422549-m02:/home/docker/cp-test_ha-422549-m03_ha-422549-m02.txt               │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m02 sudo cat /home/docker/cp-test_ha-422549-m03_ha-422549-m02.txt                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m03:/home/docker/cp-test.txt ha-422549-m04:/home/docker/cp-test_ha-422549-m03_ha-422549-m04.txt               │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test_ha-422549-m03_ha-422549-m04.txt                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp testdata/cp-test.txt ha-422549-m04:/home/docker/cp-test.txt                                                             │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3848759327/001/cp-test_ha-422549-m04.txt │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt ha-422549:/home/docker/cp-test_ha-422549-m04_ha-422549.txt                       │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549 sudo cat /home/docker/cp-test_ha-422549-m04_ha-422549.txt                                                 │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt ha-422549-m02:/home/docker/cp-test_ha-422549-m04_ha-422549-m02.txt               │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m02 sudo cat /home/docker/cp-test_ha-422549-m04_ha-422549-m02.txt                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt ha-422549-m03:/home/docker/cp-test_ha-422549-m04_ha-422549-m03.txt               │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m03 sudo cat /home/docker/cp-test_ha-422549-m04_ha-422549-m03.txt                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ node    │ ha-422549 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ node    │ ha-422549 node start m02 --alsologtostderr -v 5                                                                                      │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ node    │ ha-422549 node list --alsologtostderr -v 5                                                                                           │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │                     │
	│ stop    │ ha-422549 stop --alsologtostderr -v 5                                                                                                │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:07 UTC │
	│ start   │ ha-422549 start --wait true --alsologtostderr -v 5                                                                                   │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:07 UTC │                     │
	│ node    │ ha-422549 node list --alsologtostderr -v 5                                                                                           │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:15 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:07:23
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:07:23.018829  319301 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:07:23.019045  319301 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:07:23.019069  319301 out.go:374] Setting ErrFile to fd 2...
	I1227 20:07:23.019104  319301 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:07:23.019417  319301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:07:23.019931  319301 out.go:368] Setting JSON to false
	I1227 20:07:23.020994  319301 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6595,"bootTime":1766859448,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:07:23.021172  319301 start.go:143] virtualization:  
	I1227 20:07:23.026478  319301 out.go:179] * [ha-422549] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:07:23.029624  319301 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:07:23.029657  319301 notify.go:221] Checking for updates...
	I1227 20:07:23.035732  319301 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:07:23.038626  319301 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:07:23.041521  319301 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:07:23.044303  319301 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:07:23.047245  319301 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:07:23.050815  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:07:23.050954  319301 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:07:23.074861  319301 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:07:23.074978  319301 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:07:23.134894  319301 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 20:07:23.1261821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:07:23.135004  319301 docker.go:319] overlay module found
	I1227 20:07:23.138113  319301 out.go:179] * Using the docker driver based on existing profile
	I1227 20:07:23.140925  319301 start.go:309] selected driver: docker
	I1227 20:07:23.140943  319301 start.go:928] validating driver "docker" against &{Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:07:23.141082  319301 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:07:23.141181  319301 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:07:23.197269  319301 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 20:07:23.188068839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:07:23.197711  319301 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:07:23.197745  319301 cni.go:84] Creating CNI manager for ""
	I1227 20:07:23.197797  319301 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1227 20:07:23.197857  319301 start.go:353] cluster config:
	{Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:07:23.202906  319301 out.go:179] * Starting "ha-422549" primary control-plane node in "ha-422549" cluster
	I1227 20:07:23.205659  319301 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:07:23.208577  319301 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:07:23.211352  319301 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:07:23.211401  319301 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:07:23.211416  319301 cache.go:65] Caching tarball of preloaded images
	I1227 20:07:23.211429  319301 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:07:23.211499  319301 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:07:23.211509  319301 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:07:23.211655  319301 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:07:23.229712  319301 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:07:23.229734  319301 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:07:23.229749  319301 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:07:23.229779  319301 start.go:360] acquireMachinesLock for ha-422549: {Name:mk939e8ee4c2bedc86cc6a99d76298e7b2a26ce2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:07:23.229835  319301 start.go:364] duration metric: took 35.657µs to acquireMachinesLock for "ha-422549"
	I1227 20:07:23.229869  319301 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:07:23.229878  319301 fix.go:54] fixHost starting: 
	I1227 20:07:23.230138  319301 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:07:23.246992  319301 fix.go:112] recreateIfNeeded on ha-422549: state=Stopped err=<nil>
	W1227 20:07:23.247025  319301 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:07:23.250226  319301 out.go:252] * Restarting existing docker container for "ha-422549" ...
	I1227 20:07:23.250324  319301 cli_runner.go:164] Run: docker start ha-422549
	I1227 20:07:23.503347  319301 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:07:23.526447  319301 kic.go:430] container "ha-422549" state is running.
	I1227 20:07:23.526916  319301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:07:23.555271  319301 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:07:23.555509  319301 machine.go:94] provisionDockerMachine start ...
	I1227 20:07:23.555569  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:23.577158  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:23.577524  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1227 20:07:23.577542  319301 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:07:23.578121  319301 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44738->127.0.0.1:33173: read: connection reset by peer
	I1227 20:07:26.720977  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549
	
	I1227 20:07:26.721006  319301 ubuntu.go:182] provisioning hostname "ha-422549"
	I1227 20:07:26.721067  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:26.738818  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:26.739131  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1227 20:07:26.739148  319301 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-422549 && echo "ha-422549" | sudo tee /etc/hostname
	I1227 20:07:26.886109  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549
	
	I1227 20:07:26.886195  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:26.903863  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:26.904173  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1227 20:07:26.904194  319301 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422549' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422549/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422549' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:07:27.041724  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: 
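	[Note: the injected shell above keeps the 127.0.1.1 hostname entry idempotent (add it only if no /etc/hosts line already names the host). A minimal standalone Go sketch of the same idea, assuming the path and hostname shown in the log and skipping the sed-replace branch; illustrative only, not minikube's code:]

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry appends "127.0.1.1 <hostname>" unless some line in the
// hosts file already mentions the hostname.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	for _, line := range strings.Split(string(data), "\n") {
		for _, field := range strings.Fields(line) {
			if field == hostname {
				return nil // entry already present
			}
		}
	}
	f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = fmt.Fprintf(f, "127.0.1.1 %s\n", hostname)
	return err
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "ha-422549"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}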
	I1227 20:07:27.041750  319301 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:07:27.041786  319301 ubuntu.go:190] setting up certificates
	I1227 20:07:27.041803  319301 provision.go:84] configureAuth start
	I1227 20:07:27.041869  319301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:07:27.060364  319301 provision.go:143] copyHostCerts
	I1227 20:07:27.060422  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:07:27.060455  319301 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:07:27.060473  319301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:07:27.060550  319301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:07:27.060645  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:07:27.060668  319301 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:07:27.060679  319301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:07:27.060709  319301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:07:27.060761  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:07:27.060783  319301 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:07:27.060791  319301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:07:27.060818  319301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:07:27.060870  319301 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.ha-422549 san=[127.0.0.1 192.168.49.2 ha-422549 localhost minikube]
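	[Note: the server certificate above is issued for the SAN set [127.0.0.1 192.168.49.2 ha-422549 localhost minikube]. A rough, self-contained Go sketch of issuing a certificate with those SANs via the standard library; it is self-signed here for brevity, whereas minikube signs with its CA key, and every value below is either taken from the log or an illustrative assumption:]

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-422549"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log line above.
		DNSNames:    []string{"ha-422549", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	// Self-signed for brevity; a real provisioner would pass the CA cert and key here.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}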
	I1227 20:07:27.239677  319301 provision.go:177] copyRemoteCerts
	I1227 20:07:27.239745  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:07:27.239800  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:27.259369  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:07:27.364829  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:07:27.364890  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:07:27.382288  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:07:27.382362  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1227 20:07:27.399154  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:07:27.399213  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:07:27.417099  319301 provision.go:87] duration metric: took 375.277706ms to configureAuth
	I1227 20:07:27.417133  319301 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:07:27.417387  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:07:27.417527  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:27.434441  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:27.434764  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1227 20:07:27.434789  319301 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:07:27.806912  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:07:27.806938  319301 machine.go:97] duration metric: took 4.251419469s to provisionDockerMachine
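	[Note: the provisioning step above writes a one-line sysconfig drop-in (CRIO_MINIKUBE_OPTIONS with --insecure-registry 10.96.0.0/12) and restarts CRI-O over SSH. A hedged local-host equivalent in Go, assuming a systemd host and the same paths; a sketch, not minikube's implementation:]

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const dropIn = "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(dropIn), 0644); err != nil {
		log.Fatal(err)
	}
	// Restart the CRI-O service so it picks up the extra options.
	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
		log.Fatalf("restart crio: %v\n%s", err, out)
	}
}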
	I1227 20:07:27.806950  319301 start.go:293] postStartSetup for "ha-422549" (driver="docker")
	I1227 20:07:27.806961  319301 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:07:27.807018  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:07:27.807063  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:27.827185  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:07:27.924757  319301 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:07:27.927910  319301 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:07:27.927939  319301 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:07:27.927951  319301 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:07:27.928034  319301 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:07:27.928163  319301 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:07:27.928176  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:07:27.928319  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:07:27.935125  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:07:27.951297  319301 start.go:296] duration metric: took 144.328969ms for postStartSetup
	I1227 20:07:27.951425  319301 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:07:27.951489  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:27.968679  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:07:28.062963  319301 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:07:28.068245  319301 fix.go:56] duration metric: took 4.838360246s for fixHost
	I1227 20:07:28.068273  319301 start.go:83] releasing machines lock for "ha-422549", held for 4.838415218s
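	[Note: the two df probes above report the used percentage and free space of /var inside the node container. The same figures can be read without shelling out; a Linux-only Go sketch using syscall.Statfs, offered as an illustration rather than how minikube actually does it:]

package main

import (
	"fmt"
	"log"
	"syscall"
)

func main() {
	var st syscall.Statfs_t
	if err := syscall.Statfs("/var", &st); err != nil {
		log.Fatal(err)
	}
	used := st.Blocks - st.Bfree
	// df-style percentage: used blocks over blocks available to unprivileged users.
	usedPct := 100 * float64(used) / float64(used+st.Bavail)
	freeGB := float64(st.Bavail) * float64(st.Bsize) / (1 << 30)
	fmt.Printf("/var: %.0f%% used, %.1f GiB available\n", usedPct, freeGB)
}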
	I1227 20:07:28.068391  319301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:07:28.086189  319301 ssh_runner.go:195] Run: cat /version.json
	I1227 20:07:28.086242  319301 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:07:28.086251  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:28.086297  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:28.112515  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:07:28.119040  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:07:28.213229  319301 ssh_runner.go:195] Run: systemctl --version
	I1227 20:07:28.307265  319301 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:07:28.344982  319301 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:07:28.349307  319301 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:07:28.349416  319301 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:07:28.357039  319301 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:07:28.357061  319301 start.go:496] detecting cgroup driver to use...
	I1227 20:07:28.357091  319301 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:07:28.357187  319301 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:07:28.372341  319301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:07:28.385115  319301 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:07:28.385188  319301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:07:28.400803  319301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:07:28.413692  319301 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:07:28.520682  319301 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:07:28.638372  319301 docker.go:234] disabling docker service ...
	I1227 20:07:28.638476  319301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:07:28.652726  319301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:07:28.665221  319301 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:07:28.769753  319301 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:07:28.887106  319301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:07:28.901250  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:07:28.915594  319301 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:07:28.915656  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.923915  319301 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:07:28.924023  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.932251  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.940443  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.948974  319301 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:07:28.956576  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.964831  319301 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.973077  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.981210  319301 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:07:28.988289  319301 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:07:28.995419  319301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:07:29.102806  319301 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:07:29.272446  319301 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:07:29.272527  319301 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:07:29.276338  319301 start.go:574] Will wait 60s for crictl version
	I1227 20:07:29.276409  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:07:29.279905  319301 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:07:29.303871  319301 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:07:29.303984  319301 ssh_runner.go:195] Run: crio --version
	I1227 20:07:29.330697  319301 ssh_runner.go:195] Run: crio --version
	I1227 20:07:29.362339  319301 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:07:29.365125  319301 cli_runner.go:164] Run: docker network inspect ha-422549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:07:29.381233  319301 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 20:07:29.385291  319301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:07:29.396534  319301 kubeadm.go:884] updating cluster {Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:07:29.396713  319301 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:07:29.396766  319301 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:07:29.430374  319301 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:07:29.430399  319301 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:07:29.430457  319301 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:07:29.459783  319301 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:07:29.459805  319301 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:07:29.459813  319301 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I1227 20:07:29.459907  319301 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422549 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:07:29.459984  319301 ssh_runner.go:195] Run: crio config
	I1227 20:07:29.529648  319301 cni.go:84] Creating CNI manager for ""
	I1227 20:07:29.529684  319301 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1227 20:07:29.529702  319301 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:07:29.529745  319301 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422549 NodeName:ha-422549 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:07:29.529880  319301 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422549"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:07:29.529906  319301 kube-vip.go:115] generating kube-vip config ...
	I1227 20:07:29.529981  319301 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 20:07:29.541823  319301 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:07:29.541926  319301 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
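	[Note: control-plane load balancing was skipped above because the `lsmod | grep ip_vs` probe failed, so the generated kube-vip manifest relies on leader election and ARP for the VIP 192.168.49.254 instead of IPVS. A small Go sketch of the same module check via /proc/modules, which is the data lsmod reads; the function name is an assumption:]

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

// ipvsLoaded reports whether any ip_vs* kernel module appears in /proc/modules.
func ipvsLoaded() (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "ip_vs") {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := ipvsLoaded()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ip_vs loaded:", ok)
}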
	I1227 20:07:29.541995  319301 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:07:29.549349  319301 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:07:29.549419  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1227 20:07:29.556490  319301 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1227 20:07:29.568355  319301 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:07:29.580790  319301 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1227 20:07:29.593175  319301 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 20:07:29.606173  319301 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 20:07:29.609837  319301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:07:29.619217  319301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:07:29.735123  319301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:07:29.750389  319301 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549 for IP: 192.168.49.2
	I1227 20:07:29.750412  319301 certs.go:195] generating shared ca certs ...
	I1227 20:07:29.750427  319301 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:07:29.750619  319301 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:07:29.750682  319301 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:07:29.750699  319301 certs.go:257] generating profile certs ...
	I1227 20:07:29.750812  319301 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key
	I1227 20:07:29.751056  319301 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.743f7ef3
	I1227 20:07:29.751077  319301 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt.743f7ef3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1227 20:07:30.216987  319301 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt.743f7ef3 ...
	I1227 20:07:30.217024  319301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt.743f7ef3: {Name:mk5110c0017b8f4cda34fa079f107b622b8f9c47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:07:30.217226  319301 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.743f7ef3 ...
	I1227 20:07:30.217243  319301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.743f7ef3: {Name:mkb171a8982d80a151baacbc9fe03fa941196fd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:07:30.217342  319301 certs.go:382] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt.743f7ef3 -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt
	I1227 20:07:30.217509  319301 certs.go:386] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.743f7ef3 -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key
	I1227 20:07:30.217676  319301 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key
	I1227 20:07:30.217696  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:07:30.217721  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:07:30.217741  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:07:30.217759  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:07:30.217776  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:07:30.217799  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:07:30.217821  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:07:30.217837  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:07:30.217893  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:07:30.217940  319301 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:07:30.217953  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:07:30.217981  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:07:30.218009  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:07:30.218040  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:07:30.218095  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:07:30.218156  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:07:30.218174  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:07:30.218188  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:07:30.218745  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:07:30.239060  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:07:30.258056  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:07:30.279983  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:07:30.299163  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:07:30.317066  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:07:30.333792  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:07:30.363380  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:07:30.383880  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:07:30.402563  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:07:30.424158  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:07:30.441364  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:07:30.455028  319301 ssh_runner.go:195] Run: openssl version
	I1227 20:07:30.462193  319301 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:07:30.476783  319301 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:07:30.488736  319301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:07:30.492787  319301 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:07:30.492869  319301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:07:30.601338  319301 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:07:30.618710  319301 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:07:30.629367  319301 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:07:30.641908  319301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:07:30.646861  319301 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:07:30.646946  319301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:07:30.713797  319301 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:07:30.723031  319301 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:07:30.735659  319301 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:07:30.746061  319301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:07:30.750487  319301 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:07:30.750578  319301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:07:30.818577  319301 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:07:30.827800  319301 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:07:30.835007  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:07:30.906833  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:07:30.969599  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:07:31.044468  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:07:31.106453  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:07:31.155733  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
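	[Note: the `openssl x509 -checkend 86400` calls above ask whether each certificate remains valid for at least another 24 hours. A roughly equivalent check in Go with crypto/x509; the file path and helper name below are assumptions for illustration:]

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresSoon reports whether the PEM certificate at path expires within the
// given duration, mirroring openssl's -checkend semantics.
func expiresSoon(path string, within time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(within).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}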
	I1227 20:07:31.197366  319301 kubeadm.go:401] StartCluster: {Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:07:31.197537  319301 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:07:31.197613  319301 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:07:31.226634  319301 cri.go:96] found id: "c3f87ac29708d39b5580f953e8ccc765b36b830cf405bc7750b8afe798a15a77"
	I1227 20:07:31.226665  319301 cri.go:96] found id: "79f65bc2e1dbcf7ebe07acaf2143b45f059da3390e107fc3eb87595ccc5f920d"
	I1227 20:07:31.226671  319301 cri.go:96] found id: "dd811e752da4c2025246e605ecc1690aba8141353e20fb91cdad4468a1c059f9"
	I1227 20:07:31.226675  319301 cri.go:96] found id: "feeed30c26dbbb06391e6c43a6d6041af28ce218eaf23eec819dc38cda9444e8"
	I1227 20:07:31.226679  319301 cri.go:96] found id: "bbf24a80fc638071d98a1cc08ab823b436cc206cb456eac7a8be7958d11889db"
	I1227 20:07:31.226683  319301 cri.go:96] found id: ""
	I1227 20:07:31.226745  319301 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:07:31.244824  319301 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:07:31Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:07:31.244903  319301 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:07:31.257811  319301 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:07:31.257842  319301 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:07:31.257908  319301 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:07:31.270645  319301 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:07:31.271073  319301 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-422549" does not appear in /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:07:31.271185  319301 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-272475/kubeconfig needs updating (will repair): [kubeconfig missing "ha-422549" cluster setting kubeconfig missing "ha-422549" context setting]
	I1227 20:07:31.271518  319301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:07:31.272112  319301 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 20:07:31.272794  319301 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1227 20:07:31.272816  319301 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1227 20:07:31.272823  319301 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1227 20:07:31.272851  319301 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1227 20:07:31.272828  319301 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1227 20:07:31.272895  319301 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1227 20:07:31.272900  319301 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1227 20:07:31.273215  319301 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:07:31.284048  319301 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1227 20:07:31.284081  319301 kubeadm.go:602] duration metric: took 26.232251ms to restartPrimaryControlPlane
	I1227 20:07:31.284090  319301 kubeadm.go:403] duration metric: took 86.73489ms to StartCluster
	I1227 20:07:31.284107  319301 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:07:31.284175  319301 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:07:31.284780  319301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:07:31.284997  319301 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:07:31.285023  319301 start.go:242] waiting for startup goroutines ...
	I1227 20:07:31.285032  319301 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:07:31.285574  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:07:31.290925  319301 out.go:179] * Enabled addons: 
	I1227 20:07:31.294082  319301 addons.go:530] duration metric: took 9.037764ms for enable addons: enabled=[]
	I1227 20:07:31.294137  319301 start.go:247] waiting for cluster config update ...
	I1227 20:07:31.294152  319301 start.go:256] writing updated cluster config ...
	I1227 20:07:31.297568  319301 out.go:203] 
	I1227 20:07:31.300820  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:07:31.300937  319301 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:07:31.304320  319301 out.go:179] * Starting "ha-422549-m02" control-plane node in "ha-422549" cluster
	I1227 20:07:31.306983  319301 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:07:31.309971  319301 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:07:31.312773  319301 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:07:31.312796  319301 cache.go:65] Caching tarball of preloaded images
	I1227 20:07:31.312889  319301 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:07:31.312906  319301 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:07:31.313029  319301 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:07:31.313257  319301 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:07:31.349637  319301 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:07:31.349662  319301 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:07:31.349676  319301 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:07:31.349708  319301 start.go:360] acquireMachinesLock for ha-422549-m02: {Name:mk8fc7aa5d6c41749cc4b9db094e3fd243d8b868 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:07:31.349765  319301 start.go:364] duration metric: took 37.299µs to acquireMachinesLock for "ha-422549-m02"
	I1227 20:07:31.349791  319301 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:07:31.349796  319301 fix.go:54] fixHost starting: m02
	I1227 20:07:31.350055  319301 cli_runner.go:164] Run: docker container inspect ha-422549-m02 --format={{.State.Status}}
	I1227 20:07:31.391676  319301 fix.go:112] recreateIfNeeded on ha-422549-m02: state=Stopped err=<nil>
	W1227 20:07:31.391706  319301 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:07:31.394953  319301 out.go:252] * Restarting existing docker container for "ha-422549-m02" ...
	I1227 20:07:31.395043  319301 cli_runner.go:164] Run: docker start ha-422549-m02
	I1227 20:07:31.777922  319301 cli_runner.go:164] Run: docker container inspect ha-422549-m02 --format={{.State.Status}}
	I1227 20:07:31.805184  319301 kic.go:430] container "ha-422549-m02" state is running.
	I1227 20:07:31.805591  319301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m02
	I1227 20:07:31.841697  319301 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:07:31.841951  319301 machine.go:94] provisionDockerMachine start ...
	I1227 20:07:31.842022  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:31.865663  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:31.865982  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1227 20:07:31.865998  319301 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:07:31.866584  319301 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58412->127.0.0.1:33178: read: connection reset by peer
	I1227 20:07:35.045099  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m02
	
	I1227 20:07:35.045161  319301 ubuntu.go:182] provisioning hostname "ha-422549-m02"
	I1227 20:07:35.045260  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:35.074417  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:35.074732  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1227 20:07:35.074750  319301 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-422549-m02 && echo "ha-422549-m02" | sudo tee /etc/hostname
	I1227 20:07:35.272951  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m02
	
	I1227 20:07:35.273095  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:35.310855  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:35.311167  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1227 20:07:35.311187  319301 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422549-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422549-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422549-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:07:35.489398  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:07:35.489483  319301 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:07:35.489515  319301 ubuntu.go:190] setting up certificates
	I1227 20:07:35.489552  319301 provision.go:84] configureAuth start
	I1227 20:07:35.489651  319301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m02
	I1227 20:07:35.519140  319301 provision.go:143] copyHostCerts
	I1227 20:07:35.519180  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:07:35.519212  319301 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:07:35.519219  319301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:07:35.519305  319301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:07:35.519384  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:07:35.519400  319301 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:07:35.519405  319301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:07:35.519428  319301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:07:35.519467  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:07:35.519482  319301 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:07:35.519486  319301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:07:35.519508  319301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:07:35.519552  319301 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.ha-422549-m02 san=[127.0.0.1 192.168.49.3 ha-422549-m02 localhost minikube]
	I1227 20:07:35.673804  319301 provision.go:177] copyRemoteCerts
	I1227 20:07:35.676274  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:07:35.676362  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:35.700203  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:07:35.810686  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:07:35.810802  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 20:07:35.827198  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:07:35.827254  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:07:35.847940  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:07:35.848040  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:07:35.870095  319301 provision.go:87] duration metric: took 380.509887ms to configureAuth
	I1227 20:07:35.870124  319301 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:07:35.870422  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:07:35.870563  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:35.893611  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:35.893918  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1227 20:07:35.893932  319301 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:07:36.282435  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:07:36.282459  319301 machine.go:97] duration metric: took 4.440490595s to provisionDockerMachine
	I1227 20:07:36.282470  319301 start.go:293] postStartSetup for "ha-422549-m02" (driver="docker")
	I1227 20:07:36.282505  319301 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:07:36.282595  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:07:36.282666  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:36.301003  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:07:36.402628  319301 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:07:36.406068  319301 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:07:36.406097  319301 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:07:36.406108  319301 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:07:36.406247  319301 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:07:36.406355  319301 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:07:36.406371  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:07:36.406502  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:07:36.414126  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:07:36.431291  319301 start.go:296] duration metric: took 148.805898ms for postStartSetup
	I1227 20:07:36.431373  319301 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:07:36.431417  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:36.449358  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:07:36.546713  319301 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:07:36.551629  319301 fix.go:56] duration metric: took 5.201823785s for fixHost
	I1227 20:07:36.551655  319301 start.go:83] releasing machines lock for "ha-422549-m02", held for 5.20187627s
	I1227 20:07:36.551729  319301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m02
	I1227 20:07:36.571695  319301 out.go:179] * Found network options:
	I1227 20:07:36.574736  319301 out.go:179]   - NO_PROXY=192.168.49.2
	W1227 20:07:36.577654  319301 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:07:36.577694  319301 proxy.go:120] fail to check proxy env: Error ip not in block
	I1227 20:07:36.577781  319301 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:07:36.577827  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:36.578074  319301 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:07:36.578134  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:36.598248  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:07:36.598898  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:07:36.873888  319301 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:07:36.879823  319301 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:07:36.879937  319301 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:07:36.899888  319301 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:07:36.899953  319301 start.go:496] detecting cgroup driver to use...
	I1227 20:07:36.899997  319301 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:07:36.900076  319301 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:07:36.928970  319301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:07:36.947727  319301 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:07:36.947845  319301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:07:36.967863  319301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:07:36.998332  319301 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:07:37.167619  319301 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:07:37.326628  319301 docker.go:234] disabling docker service ...
	I1227 20:07:37.326748  319301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:07:37.341981  319301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:07:37.354777  319301 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:07:37.613409  319301 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:07:37.870750  319301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:07:37.886152  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:07:37.906254  319301 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:07:37.906377  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.926031  319301 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:07:37.926143  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.937485  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.946425  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.958890  319301 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:07:37.968858  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.978269  319301 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.986277  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.995011  319301 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:07:38.002468  319301 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:07:38.010027  319301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:07:38.207437  319301 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:09:08.647737  319301 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.440260784s)
	I1227 20:09:08.647767  319301 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:09:08.647821  319301 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:09:08.651981  319301 start.go:574] Will wait 60s for crictl version
	I1227 20:09:08.652048  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:09:08.655690  319301 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:09:08.681479  319301 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:09:08.681565  319301 ssh_runner.go:195] Run: crio --version
	I1227 20:09:08.713332  319301 ssh_runner.go:195] Run: crio --version
	I1227 20:09:08.746336  319301 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:09:08.749205  319301 out.go:179]   - env NO_PROXY=192.168.49.2
	I1227 20:09:08.752182  319301 cli_runner.go:164] Run: docker network inspect ha-422549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:09:08.768090  319301 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 20:09:08.771937  319301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:09:08.781622  319301 mustload.go:66] Loading cluster: ha-422549
	I1227 20:09:08.781869  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:09:08.782144  319301 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:09:08.798634  319301 host.go:66] Checking if "ha-422549" exists ...
	I1227 20:09:08.798913  319301 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549 for IP: 192.168.49.3
	I1227 20:09:08.798926  319301 certs.go:195] generating shared ca certs ...
	I1227 20:09:08.798941  319301 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:09:08.799067  319301 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:09:08.799116  319301 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:09:08.799129  319301 certs.go:257] generating profile certs ...
	I1227 20:09:08.799210  319301 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key
	I1227 20:09:08.799280  319301 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.982843aa
	I1227 20:09:08.799324  319301 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key
	I1227 20:09:08.799337  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:09:08.799350  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:09:08.799367  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:09:08.799386  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:09:08.799406  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:09:08.799422  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:09:08.799438  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:09:08.799453  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:09:08.799510  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:09:08.799546  319301 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:09:08.799559  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:09:08.799588  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:09:08.799617  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:09:08.799646  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:09:08.799694  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:09:08.799727  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:09:08.799744  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:09:08.799758  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:09:08.799822  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:09:08.817939  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:09:08.909783  319301 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1227 20:09:08.913788  319301 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1227 20:09:08.922116  319301 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1227 20:09:08.925553  319301 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1227 20:09:08.933735  319301 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1227 20:09:08.937584  319301 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1227 20:09:08.946742  319301 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1227 20:09:08.951033  319301 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1227 20:09:08.959969  319301 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1227 20:09:08.963648  319301 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1227 20:09:08.971803  319301 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1227 20:09:08.975349  319301 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1227 20:09:08.983445  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:09:09.001559  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:09:09.020775  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:09:09.041958  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:09:09.059931  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:09:09.076796  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:09:09.095447  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:09:09.113037  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:09:09.130903  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:09:09.148555  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:09:09.167075  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:09:09.184251  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1227 20:09:09.197053  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1227 20:09:09.209869  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1227 20:09:09.223329  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1227 20:09:09.236109  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1227 20:09:09.249524  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1227 20:09:09.262558  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (728 bytes)
	I1227 20:09:09.278766  319301 ssh_runner.go:195] Run: openssl version
	I1227 20:09:09.288173  319301 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:09:09.303263  319301 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:09:09.312839  319301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:09:09.317343  319301 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:09:09.317435  319301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:09:09.358946  319301 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:09:09.366603  319301 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:09:09.374144  319301 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:09:09.381566  319301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:09:09.385396  319301 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:09:09.385483  319301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:09:09.427186  319301 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:09:09.435033  319301 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:09:09.442740  319301 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:09:09.450736  319301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:09:09.455313  319301 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:09:09.455406  319301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:09:09.506456  319301 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:09:09.515191  319301 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:09:09.519143  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:09:09.560830  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:09:09.601733  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:09:09.642802  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:09:09.683557  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:09:09.724343  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
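(Each `openssl x509 -noout -checkend 86400` run above asks whether a control-plane certificate will still be valid 24 hours from now; a non-zero exit would mark it for regeneration. A minimal sketch of the equivalent check in Go, stdlib only — the certificate path is copied from the log and the helper name is illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within the given window — the same question `openssl x509 -checkend 86400`
// answers via its exit status.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// 24h matches the -checkend 86400 argument in the log.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
)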
	I1227 20:09:09.764937  319301 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.35.0 crio true true} ...
	I1227 20:09:09.765044  319301 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422549-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:09:09.765076  319301 kube-vip.go:115] generating kube-vip config ...
	I1227 20:09:09.765126  319301 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 20:09:09.777907  319301 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:09:09.778008  319301 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1227 20:09:09.778101  319301 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:09:09.785542  319301 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:09:09.785669  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1227 20:09:09.793814  319301 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 20:09:09.808509  319301 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:09:09.822210  319301 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 20:09:09.836025  319301 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 20:09:09.840416  319301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:09:09.851735  319301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:09:09.987416  319301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:09:10.000958  319301 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:09:10.001514  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:09:10.006801  319301 out.go:179] * Verifying Kubernetes components...
	I1227 20:09:10.009655  319301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:09:10.156826  319301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:09:10.171179  319301 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
	(line continues, wrapped in the report rendering:)
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1227 20:09:10.171261  319301 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1227 20:09:10.171542  319301 node_ready.go:35] waiting up to 6m0s for node "ha-422549-m02" to be "Ready" ...
	I1227 20:09:13.107692  319301 node_ready.go:49] node "ha-422549-m02" is "Ready"
	I1227 20:09:13.107720  319301 node_ready.go:38] duration metric: took 2.936159281s for node "ha-422549-m02" to be "Ready" ...
	I1227 20:09:13.107734  319301 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:09:13.107789  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:13.607926  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:14.107987  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:14.607959  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:15.108981  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:15.607952  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:16.108673  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:16.608170  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:17.108757  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:17.608081  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:18.108738  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:18.608607  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:19.108699  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:19.608389  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:20.107908  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:20.608001  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:21.108548  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:21.608334  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:22.108180  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:22.607875  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:23.108675  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:23.608625  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:24.108180  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:24.608668  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:25.108754  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:25.607950  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:26.107930  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:26.607944  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:27.108744  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:27.608613  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:28.108398  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:28.608347  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:29.108513  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:29.607943  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:30.108298  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:30.607986  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:31.108862  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:31.608852  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:32.108838  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:32.608448  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:33.108526  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:33.608595  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:34.108250  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:34.607930  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:35.107952  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:35.608214  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:36.108509  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:36.608114  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:37.108454  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:37.607937  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:38.108594  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:38.607928  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:39.107995  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:39.608876  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:40.107937  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:40.607935  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:41.108437  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:41.607967  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:42.110329  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:42.608527  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:43.108197  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:43.608003  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:44.108494  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:44.608788  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:45.108779  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:45.608786  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:46.108080  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:46.608527  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:47.108485  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:47.608412  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:48.108174  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:48.608559  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:49.108719  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:49.608778  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:50.108396  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:50.608188  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:51.108854  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:51.607920  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:52.108260  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:52.607897  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:53.108165  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:53.608820  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:54.107921  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:54.608807  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:55.107966  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:55.608683  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:56.108704  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:56.608641  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:57.107949  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:57.608891  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:58.107911  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:58.607913  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:59.108124  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:59.608080  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:00.126668  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:00.607936  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:01.107972  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:01.607964  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:02.108918  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:02.608274  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:03.108889  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:03.607948  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:04.108838  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:04.608617  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:05.108707  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:05.608552  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:06.108350  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:06.607927  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:07.108601  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:07.607942  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:08.108292  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:08.607954  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:09.108836  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:09.608829  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
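(The block of near-identical lines above is minikube's apiserver wait loop: roughly every 500ms it re-runs `sudo pgrep -xnf kube-apiserver.*minikube.*` on the node, and since the process never appears it falls back to listing CRI containers and gathering component logs in the lines that follow. A minimal, hypothetical Go sketch of that poll-with-timeout shape, run locally rather than over SSH — the pgrep pattern and 500ms interval come from the log, the function name and 2-minute timeout are assumptions:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess re-runs the pgrep pattern match every 500ms until it
// succeeds or the deadline passes; pgrep exits 0 only when a matching
// process exists.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no process matching %q after %s", pattern, timeout)
}

func main() {
	// Pattern copied from the log; the timeout value is illustrative.
	if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
)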
	I1227 20:10:10.108562  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:10.108721  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:10.138615  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:10.138637  319301 cri.go:96] found id: ""
	I1227 20:10:10.138646  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:10.138711  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:10.143115  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:10.143189  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:10.173558  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:10.173579  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:10.173584  319301 cri.go:96] found id: ""
	I1227 20:10:10.173592  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:10.173653  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:10.178008  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:10.182191  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:10.182272  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:10.220643  319301 cri.go:96] found id: ""
	I1227 20:10:10.220668  319301 logs.go:282] 0 containers: []
	W1227 20:10:10.220677  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:10.220684  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:10.220746  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:10.250139  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:10.250162  319301 cri.go:96] found id: ""
	I1227 20:10:10.250170  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:10.250228  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:10.253966  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:10.254039  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:10.290311  319301 cri.go:96] found id: ""
	I1227 20:10:10.290334  319301 logs.go:282] 0 containers: []
	W1227 20:10:10.290343  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:10.290349  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:10.290422  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:10.319925  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:10.319948  319301 cri.go:96] found id: ""
	I1227 20:10:10.319974  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:10.320031  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:10.323821  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:10.323902  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:10.352069  319301 cri.go:96] found id: ""
	I1227 20:10:10.352091  319301 logs.go:282] 0 containers: []
	W1227 20:10:10.352100  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:10.352115  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:10.352127  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:10.451345  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:10.451385  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:10.469929  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:10.469961  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:10.875866  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:10.868032    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.868914    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.869711    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.870583    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.872332    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:10.868032    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.868914    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.869711    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.870583    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.872332    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:10.875894  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:10.875909  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:10.936407  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:10.936442  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:10.983671  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:10.983707  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:11.017260  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:11.017294  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:11.052563  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:11.052594  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:11.130184  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:11.130222  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:11.162524  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:11.162557  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:13.706075  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:13.716624  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:13.716698  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:13.747368  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:13.747388  319301 cri.go:96] found id: ""
	I1227 20:10:13.747396  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:13.747456  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:13.751096  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:13.751188  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:13.777717  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:13.777790  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:13.777802  319301 cri.go:96] found id: ""
	I1227 20:10:13.777811  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:13.777878  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:13.781548  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:13.785083  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:13.785193  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:13.811036  319301 cri.go:96] found id: ""
	I1227 20:10:13.811063  319301 logs.go:282] 0 containers: []
	W1227 20:10:13.811072  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:13.811079  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:13.811137  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:13.837822  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:13.837845  319301 cri.go:96] found id: ""
	I1227 20:10:13.837854  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:13.837911  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:13.841739  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:13.841856  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:13.868264  319301 cri.go:96] found id: ""
	I1227 20:10:13.868341  319301 logs.go:282] 0 containers: []
	W1227 20:10:13.868364  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:13.868387  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:13.868471  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:13.894511  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:13.894535  319301 cri.go:96] found id: ""
	I1227 20:10:13.894543  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:13.894621  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:13.898655  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:13.898764  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:13.924022  319301 cri.go:96] found id: ""
	I1227 20:10:13.924047  319301 logs.go:282] 0 containers: []
	W1227 20:10:13.924062  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:13.924077  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:13.924089  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:13.956536  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:13.956567  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:14.057854  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:14.057894  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:14.139219  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:14.129809    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.130897    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.132418    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.132833    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.134384    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:14.129809    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.130897    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.132418    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.132833    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.134384    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:14.139251  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:14.139265  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:14.182716  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:14.182750  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:14.208224  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:14.208301  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:14.225984  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:14.226016  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:14.256249  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:14.256314  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:14.301058  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:14.301201  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:14.329017  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:14.329046  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:16.906959  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:16.917912  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:16.917986  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:16.947235  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:16.947299  319301 cri.go:96] found id: ""
	I1227 20:10:16.947322  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:16.947404  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:16.951076  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:16.951204  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:16.984938  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:16.984962  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:16.984968  319301 cri.go:96] found id: ""
	I1227 20:10:16.984976  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:16.985053  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:16.988800  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:16.992512  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:16.992592  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:17.026764  319301 cri.go:96] found id: ""
	I1227 20:10:17.026789  319301 logs.go:282] 0 containers: []
	W1227 20:10:17.026798  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:17.026804  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:17.026875  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:17.053717  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:17.053741  319301 cri.go:96] found id: ""
	I1227 20:10:17.053749  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:17.053803  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:17.057601  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:17.057691  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:17.088432  319301 cri.go:96] found id: ""
	I1227 20:10:17.088455  319301 logs.go:282] 0 containers: []
	W1227 20:10:17.088464  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:17.088470  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:17.088529  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:17.115961  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:17.115985  319301 cri.go:96] found id: ""
	I1227 20:10:17.115995  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:17.116046  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:17.119890  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:17.119963  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:17.148631  319301 cri.go:96] found id: ""
	I1227 20:10:17.148654  319301 logs.go:282] 0 containers: []
	W1227 20:10:17.148663  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:17.148678  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:17.148694  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:17.240100  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:17.240138  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:17.259693  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:17.259725  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:17.291635  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:17.291666  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:17.368588  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:17.368624  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:17.407623  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:17.407652  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:17.475650  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:17.467352    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.467760    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.469497    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.470032    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.471718    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:17.467352    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.467760    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.469497    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.470032    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.471718    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:17.475719  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:17.475739  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:17.516294  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:17.516328  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:17.559509  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:17.559544  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:17.587296  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:17.587332  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:20.115472  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:20.126778  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:20.126847  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:20.153825  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:20.153850  319301 cri.go:96] found id: ""
	I1227 20:10:20.153859  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:20.153919  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:20.157682  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:20.157759  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:20.189317  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:20.189386  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:20.189420  319301 cri.go:96] found id: ""
	I1227 20:10:20.189493  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:20.189582  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:20.193669  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:20.197374  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:20.197473  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:20.237542  319301 cri.go:96] found id: ""
	I1227 20:10:20.237570  319301 logs.go:282] 0 containers: []
	W1227 20:10:20.237579  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:20.237585  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:20.237643  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:20.274313  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:20.274381  319301 cri.go:96] found id: ""
	I1227 20:10:20.274417  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:20.274509  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:20.279651  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:20.279718  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:20.306525  319301 cri.go:96] found id: ""
	I1227 20:10:20.306586  319301 logs.go:282] 0 containers: []
	W1227 20:10:20.306610  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:20.306636  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:20.306707  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:20.333808  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:20.333829  319301 cri.go:96] found id: ""
	I1227 20:10:20.333837  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:20.333927  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:20.337575  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:20.337677  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:20.372581  319301 cri.go:96] found id: ""
	I1227 20:10:20.372607  319301 logs.go:282] 0 containers: []
	W1227 20:10:20.372621  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:20.372636  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:20.372647  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:20.467758  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:20.467794  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:20.486495  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:20.486527  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:20.553188  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:20.545238    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.545758    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.547330    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.548070    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.549570    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:20.545238    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.545758    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.547330    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.548070    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.549570    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:20.553253  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:20.553282  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:20.580345  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:20.580374  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:20.626310  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:20.626345  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:20.670432  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:20.670467  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:20.696170  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:20.696199  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:20.730948  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:20.730976  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:20.805291  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:20.805325  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:23.351696  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:23.362369  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:23.362478  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:23.391572  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:23.391649  319301 cri.go:96] found id: ""
	I1227 20:10:23.391664  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:23.391739  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:23.395547  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:23.395671  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:23.422118  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:23.422141  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:23.422147  319301 cri.go:96] found id: ""
	I1227 20:10:23.422155  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:23.422235  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:23.426008  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:23.429336  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:23.429411  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:23.459272  319301 cri.go:96] found id: ""
	I1227 20:10:23.459299  319301 logs.go:282] 0 containers: []
	W1227 20:10:23.459308  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:23.459316  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:23.459398  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:23.484648  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:23.484671  319301 cri.go:96] found id: ""
	I1227 20:10:23.484679  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:23.484755  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:23.488422  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:23.488501  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:23.512953  319301 cri.go:96] found id: ""
	I1227 20:10:23.512978  319301 logs.go:282] 0 containers: []
	W1227 20:10:23.512987  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:23.512994  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:23.513049  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:23.538866  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:23.538889  319301 cri.go:96] found id: ""
	I1227 20:10:23.538898  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:23.538952  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:23.542487  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:23.542556  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:23.568959  319301 cri.go:96] found id: ""
	I1227 20:10:23.568985  319301 logs.go:282] 0 containers: []
	W1227 20:10:23.568994  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:23.569010  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:23.569023  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:23.614313  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:23.614346  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:23.639847  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:23.639875  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:23.671907  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:23.671936  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:23.702365  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:23.702394  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:23.783203  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:23.783246  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:23.884915  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:23.884948  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:23.902305  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:23.902337  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:23.970687  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:23.961560    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.962112    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.963576    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.964060    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.965635    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:23.961560    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.962112    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.963576    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.964060    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.965635    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:23.970722  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:23.970735  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:24.004792  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:24.004819  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:26.564703  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:26.575059  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:26.575143  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:26.604294  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:26.604317  319301 cri.go:96] found id: ""
	I1227 20:10:26.604326  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:26.604381  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:26.608875  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:26.608942  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:26.634574  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:26.634595  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:26.634600  319301 cri.go:96] found id: ""
	I1227 20:10:26.634607  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:26.634660  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:26.638317  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:26.641718  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:26.641787  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:26.670771  319301 cri.go:96] found id: ""
	I1227 20:10:26.670793  319301 logs.go:282] 0 containers: []
	W1227 20:10:26.670802  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:26.670808  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:26.670867  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:26.697344  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:26.697376  319301 cri.go:96] found id: ""
	I1227 20:10:26.697386  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:26.697491  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:26.701237  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:26.701344  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:26.726058  319301 cri.go:96] found id: ""
	I1227 20:10:26.726125  319301 logs.go:282] 0 containers: []
	W1227 20:10:26.726140  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:26.726147  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:26.726209  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:26.752574  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:26.752594  319301 cri.go:96] found id: ""
	I1227 20:10:26.752602  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:26.752658  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:26.756386  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:26.756457  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:26.786442  319301 cri.go:96] found id: ""
	I1227 20:10:26.786465  319301 logs.go:282] 0 containers: []
	W1227 20:10:26.786474  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:26.786488  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:26.786500  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:26.814367  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:26.814441  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:26.839989  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:26.840061  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:26.876712  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:26.876796  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:26.918742  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:26.918784  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:26.961668  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:26.961699  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:26.994123  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:26.994151  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:27.085553  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:27.085590  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:27.186397  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:27.186433  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:27.204121  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:27.204153  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:27.273016  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:27.262702    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.263577    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.265227    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.266801    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.267439    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:27.262702    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.263577    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.265227    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.266801    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.267439    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:29.773264  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:29.783744  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:29.783817  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:29.813744  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:29.813806  319301 cri.go:96] found id: ""
	I1227 20:10:29.813829  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:29.813919  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:29.818669  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:29.818786  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:29.844784  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:29.844802  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:29.844806  319301 cri.go:96] found id: ""
	I1227 20:10:29.844814  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:29.844868  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:29.848603  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:29.852078  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:29.852143  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:29.878788  319301 cri.go:96] found id: ""
	I1227 20:10:29.878814  319301 logs.go:282] 0 containers: []
	W1227 20:10:29.878823  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:29.878830  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:29.878890  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:29.908178  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:29.908200  319301 cri.go:96] found id: ""
	I1227 20:10:29.908209  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:29.908264  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:29.911793  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:29.911884  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:29.952724  319301 cri.go:96] found id: ""
	I1227 20:10:29.952749  319301 logs.go:282] 0 containers: []
	W1227 20:10:29.952759  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:29.952765  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:29.952855  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:30.008208  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:30.008289  319301 cri.go:96] found id: ""
	I1227 20:10:30.008312  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:30.008390  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:30.012672  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:30.012766  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:30.063201  319301 cri.go:96] found id: ""
	I1227 20:10:30.063273  319301 logs.go:282] 0 containers: []
	W1227 20:10:30.063297  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:30.063334  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:30.063369  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:30.152059  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:30.152097  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:30.188985  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:30.189011  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:30.288999  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:30.289079  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:30.307734  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:30.307764  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:30.354973  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:30.355008  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:30.425745  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:30.417740    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.418295    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.419807    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.420357    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.421985    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:30.417740    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.418295    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.419807    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.420357    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.421985    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:30.425773  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:30.425789  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:30.454739  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:30.454771  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:30.511002  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:30.511040  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:30.537495  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:30.537526  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:33.065805  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:33.076295  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:33.076418  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:33.103323  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:33.103346  319301 cri.go:96] found id: ""
	I1227 20:10:33.103356  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:33.103410  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:33.107007  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:33.107081  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:33.133167  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:33.133190  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:33.133195  319301 cri.go:96] found id: ""
	I1227 20:10:33.133203  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:33.133264  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:33.137298  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:33.141081  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:33.141152  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:33.167830  319301 cri.go:96] found id: ""
	I1227 20:10:33.167854  319301 logs.go:282] 0 containers: []
	W1227 20:10:33.167862  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:33.167869  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:33.167929  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:33.196531  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:33.196555  319301 cri.go:96] found id: ""
	I1227 20:10:33.196564  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:33.196621  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:33.200165  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:33.200267  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:33.226904  319301 cri.go:96] found id: ""
	I1227 20:10:33.226933  319301 logs.go:282] 0 containers: []
	W1227 20:10:33.226943  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:33.226950  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:33.227009  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:33.254111  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:33.254132  319301 cri.go:96] found id: ""
	I1227 20:10:33.254141  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:33.254197  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:33.258995  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:33.259128  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:33.285296  319301 cri.go:96] found id: ""
	I1227 20:10:33.285320  319301 logs.go:282] 0 containers: []
	W1227 20:10:33.285330  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:33.285350  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:33.285363  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:33.379312  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:33.379349  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:33.397669  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:33.397703  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:33.475423  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:33.464091    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.464710    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.467091    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.469890    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.471637    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:33.464091    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.464710    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.467091    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.469890    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.471637    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:33.475445  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:33.475462  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:33.505362  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:33.505391  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:33.549322  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:33.549353  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:33.592755  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:33.592789  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:33.625076  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:33.625105  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:33.676663  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:33.676692  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:33.703598  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:33.703627  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:36.283392  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:36.293854  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:36.293938  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:36.321425  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:36.321524  319301 cri.go:96] found id: ""
	I1227 20:10:36.321538  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:36.321604  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:36.325322  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:36.325393  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:36.354160  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:36.354182  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:36.354187  319301 cri.go:96] found id: ""
	I1227 20:10:36.354194  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:36.354250  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:36.357942  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:36.361261  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:36.361336  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:36.387328  319301 cri.go:96] found id: ""
	I1227 20:10:36.387356  319301 logs.go:282] 0 containers: []
	W1227 20:10:36.387366  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:36.387373  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:36.387431  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:36.418785  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:36.418807  319301 cri.go:96] found id: ""
	I1227 20:10:36.418815  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:36.418871  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:36.422631  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:36.422709  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:36.452773  319301 cri.go:96] found id: ""
	I1227 20:10:36.452799  319301 logs.go:282] 0 containers: []
	W1227 20:10:36.452807  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:36.452814  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:36.452873  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:36.478409  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:36.478432  319301 cri.go:96] found id: ""
	I1227 20:10:36.478440  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:36.478515  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:36.482226  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:36.482329  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:36.510113  319301 cri.go:96] found id: ""
	I1227 20:10:36.510139  319301 logs.go:282] 0 containers: []
	W1227 20:10:36.510148  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:36.510162  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:36.510206  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:36.528485  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:36.528518  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:36.596104  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:36.586542    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.587371    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.589128    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.589804    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.591834    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:36.586542    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.587371    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.589128    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.589804    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.591834    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:36.596128  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:36.596153  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:36.656568  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:36.656646  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:36.685002  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:36.685040  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:36.719044  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:36.719072  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:36.815628  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:36.815664  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:36.845372  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:36.845407  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:36.892923  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:36.892962  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:36.920168  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:36.920205  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:39.498228  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:39.509127  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:39.509200  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:39.535429  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:39.535450  319301 cri.go:96] found id: ""
	I1227 20:10:39.535458  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:39.535511  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:39.539036  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:39.539115  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:39.565370  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:39.565395  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:39.565401  319301 cri.go:96] found id: ""
	I1227 20:10:39.565411  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:39.565505  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:39.569317  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:39.572838  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:39.572913  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:39.600208  319301 cri.go:96] found id: ""
	I1227 20:10:39.600233  319301 logs.go:282] 0 containers: []
	W1227 20:10:39.600243  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:39.600249  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:39.600359  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:39.627924  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:39.627947  319301 cri.go:96] found id: ""
	I1227 20:10:39.627955  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:39.628038  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:39.631825  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:39.631929  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:39.670875  319301 cri.go:96] found id: ""
	I1227 20:10:39.670898  319301 logs.go:282] 0 containers: []
	W1227 20:10:39.670907  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:39.670949  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:39.671032  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:39.698935  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:39.698963  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:39.698968  319301 cri.go:96] found id: ""
	I1227 20:10:39.698976  319301 logs.go:282] 2 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:39.699057  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:39.702755  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:39.706280  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:39.706367  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:39.732144  319301 cri.go:96] found id: ""
	I1227 20:10:39.732171  319301 logs.go:282] 0 containers: []
	W1227 20:10:39.732192  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:39.732202  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:39.732218  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:39.833062  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:39.833097  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:39.851039  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:39.851169  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:39.936210  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:39.936253  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:40.017614  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:40.018998  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:40.077844  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:40.077881  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:40.191560  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:40.191604  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:40.229430  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:40.229483  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:40.316177  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:40.307077    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.308580    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.309399    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.310789    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.312661    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:40.307077    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.308580    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.309399    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.310789    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.312661    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:40.316202  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:40.316215  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:40.351544  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:40.351584  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:40.379852  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:40.379880  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:42.911718  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:42.922519  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:42.922590  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:42.949680  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:42.949705  319301 cri.go:96] found id: ""
	I1227 20:10:42.949714  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:42.949773  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:42.953773  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:42.953858  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:42.986307  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:42.986333  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:42.986340  319301 cri.go:96] found id: ""
	I1227 20:10:42.986347  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:42.986401  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:42.989939  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:42.993412  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:42.993511  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:43.027198  319301 cri.go:96] found id: ""
	I1227 20:10:43.027224  319301 logs.go:282] 0 containers: []
	W1227 20:10:43.027244  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:43.027251  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:43.027314  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:43.054716  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:43.054739  319301 cri.go:96] found id: ""
	I1227 20:10:43.054748  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:43.054803  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:43.059284  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:43.059357  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:43.093962  319301 cri.go:96] found id: ""
	I1227 20:10:43.093986  319301 logs.go:282] 0 containers: []
	W1227 20:10:43.093995  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:43.094002  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:43.094060  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:43.122219  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:43.122257  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:43.122263  319301 cri.go:96] found id: ""
	I1227 20:10:43.122270  319301 logs.go:282] 2 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:43.122337  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:43.126232  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:43.129862  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:43.129978  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:43.156857  319301 cri.go:96] found id: ""
	I1227 20:10:43.156882  319301 logs.go:282] 0 containers: []
	W1227 20:10:43.156891  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:43.156901  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:43.156914  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:43.174975  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:43.175005  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:43.219964  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:43.220004  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:43.245562  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:43.245591  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:43.276688  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:43.276770  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:43.358338  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:43.358380  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:43.402206  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:43.402234  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:43.499249  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:43.499289  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:43.576572  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:43.568454    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.569067    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.570849    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.571386    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.572871    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:43.568454    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.569067    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.570849    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.571386    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.572871    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:43.576591  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:43.576605  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:43.604599  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:43.604686  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:43.650961  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:43.651038  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:46.181580  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:46.192165  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:46.192233  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:46.218480  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:46.218500  319301 cri.go:96] found id: ""
	I1227 20:10:46.218509  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:46.218563  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:46.222189  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:46.222263  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:46.253302  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:46.253327  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:46.253332  319301 cri.go:96] found id: ""
	I1227 20:10:46.253340  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:46.253398  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:46.257309  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:46.260898  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:46.260974  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:46.289145  319301 cri.go:96] found id: ""
	I1227 20:10:46.289218  319301 logs.go:282] 0 containers: []
	W1227 20:10:46.289241  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:46.289262  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:46.289352  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:46.318927  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:46.318948  319301 cri.go:96] found id: ""
	I1227 20:10:46.318956  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:46.319015  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:46.322605  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:46.322674  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:46.354035  319301 cri.go:96] found id: ""
	I1227 20:10:46.354061  319301 logs.go:282] 0 containers: []
	W1227 20:10:46.354071  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:46.354077  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:46.354168  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:46.384710  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:46.384734  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:46.384740  319301 cri.go:96] found id: ""
	I1227 20:10:46.384748  319301 logs.go:282] 2 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:46.384803  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:46.388496  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:46.392532  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:46.392611  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:46.421588  319301 cri.go:96] found id: ""
	I1227 20:10:46.421664  319301 logs.go:282] 0 containers: []
	W1227 20:10:46.421686  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:46.421709  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:46.421746  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:46.439228  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:46.439330  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:46.484770  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:46.484806  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:46.519247  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:46.519273  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:46.597066  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:46.597101  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:46.634009  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:46.634040  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:46.701472  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:46.693690    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.694466    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.695987    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.696422    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.697877    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:46.693690    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.694466    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.695987    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.696422    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.697877    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:46.701496  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:46.701512  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:46.729296  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:46.729326  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:46.774639  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:46.774678  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:46.799969  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:46.800005  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:46.826163  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:46.826192  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:49.429141  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:49.439610  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:49.439705  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:49.470260  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:49.470283  319301 cri.go:96] found id: ""
	I1227 20:10:49.470292  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:49.470350  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:49.474256  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:49.474343  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:49.501740  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:49.501762  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:49.501767  319301 cri.go:96] found id: ""
	I1227 20:10:49.501774  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:49.501850  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:49.505843  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:49.509390  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:49.509489  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:49.543998  319301 cri.go:96] found id: ""
	I1227 20:10:49.544022  319301 logs.go:282] 0 containers: []
	W1227 20:10:49.544041  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:49.544049  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:49.544107  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:49.570494  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:49.570517  319301 cri.go:96] found id: ""
	I1227 20:10:49.570525  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:49.570581  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:49.574401  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:49.574471  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:49.603448  319301 cri.go:96] found id: ""
	I1227 20:10:49.603475  319301 logs.go:282] 0 containers: []
	W1227 20:10:49.603486  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:49.603500  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:49.603573  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:49.633356  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:49.633379  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:49.633385  319301 cri.go:96] found id: ""
	I1227 20:10:49.633392  319301 logs.go:282] 2 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:49.633474  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:49.637216  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:49.641370  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:49.641472  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:49.669518  319301 cri.go:96] found id: ""
	I1227 20:10:49.669557  319301 logs.go:282] 0 containers: []
	W1227 20:10:49.669567  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:49.669576  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:49.669588  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:49.696361  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:49.696389  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:49.721155  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:49.721184  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:49.753420  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:49.753489  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:49.832989  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:49.833025  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:49.874986  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:49.875013  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:49.978286  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:49.978321  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:49.997322  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:49.997351  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:50.080526  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:50.072015    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.072678    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.074595    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.075259    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.076874    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:50.072015    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.072678    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.074595    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.075259    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.076874    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:50.080546  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:50.080560  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:50.139866  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:50.139902  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:50.184649  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:50.184682  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:52.713968  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:52.726778  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:52.726855  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:52.758017  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:52.758040  319301 cri.go:96] found id: ""
	I1227 20:10:52.758049  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:52.758104  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:52.761780  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:52.761855  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:52.789053  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:52.789076  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:52.789081  319301 cri.go:96] found id: ""
	I1227 20:10:52.789088  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:52.789140  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:52.792812  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:52.796144  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:52.796211  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:52.825853  319301 cri.go:96] found id: ""
	I1227 20:10:52.825883  319301 logs.go:282] 0 containers: []
	W1227 20:10:52.825892  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:52.825898  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:52.825955  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:52.851800  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:52.851820  319301 cri.go:96] found id: ""
	I1227 20:10:52.851828  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:52.851881  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:52.855382  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:52.855455  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:52.885699  319301 cri.go:96] found id: ""
	I1227 20:10:52.885721  319301 logs.go:282] 0 containers: []
	W1227 20:10:52.885736  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:52.885742  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:52.885800  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:52.911251  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:52.911316  319301 cri.go:96] found id: ""
	I1227 20:10:52.911339  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:10:52.911402  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:52.914760  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:52.914841  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:52.939685  319301 cri.go:96] found id: ""
	I1227 20:10:52.939718  319301 logs.go:282] 0 containers: []
	W1227 20:10:52.939728  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:52.939742  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:52.939789  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:53.033951  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:53.033990  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:53.052877  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:53.052906  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:53.096670  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:53.096715  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:53.128695  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:53.128722  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:53.161100  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:53.161130  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:53.227545  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:53.218833    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.219420    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.221028    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.221951    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.223525    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:53.218833    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.219420    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.221028    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.221951    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.223525    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:53.227617  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:53.227640  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:53.255984  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:53.256125  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:53.313035  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:53.313074  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:53.338975  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:53.339057  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:55.915383  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:55.925492  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:55.925565  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:55.952010  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:55.952028  319301 cri.go:96] found id: ""
	I1227 20:10:55.952037  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:55.952092  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:55.955593  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:55.955667  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:55.986538  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:55.986561  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:55.986567  319301 cri.go:96] found id: ""
	I1227 20:10:55.986574  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:55.986628  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:55.990714  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:55.995050  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:55.995121  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:56.024488  319301 cri.go:96] found id: ""
	I1227 20:10:56.024565  319301 logs.go:282] 0 containers: []
	W1227 20:10:56.024588  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:56.024612  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:56.024696  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:56.056966  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:56.057039  319301 cri.go:96] found id: ""
	I1227 20:10:56.057065  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:56.057155  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:56.061997  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:56.062234  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:56.089345  319301 cri.go:96] found id: ""
	I1227 20:10:56.089372  319301 logs.go:282] 0 containers: []
	W1227 20:10:56.089381  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:56.089388  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:56.089488  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:56.117758  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:56.117782  319301 cri.go:96] found id: ""
	I1227 20:10:56.117790  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:10:56.117845  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:56.121319  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:56.121432  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:56.147067  319301 cri.go:96] found id: ""
	I1227 20:10:56.147092  319301 logs.go:282] 0 containers: []
	W1227 20:10:56.147102  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:56.147115  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:56.147130  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:56.224179  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:56.224218  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:56.256694  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:56.256721  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:56.283858  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:56.283889  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:56.353505  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:56.342078    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.343458    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.346135    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.347096    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.347948    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:56.342078    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.343458    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.346135    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.347096    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.347948    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:56.353534  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:56.353548  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:56.399836  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:56.399870  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:56.494637  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:56.494677  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:56.528262  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:56.528292  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:56.577163  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:56.577198  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:56.605916  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:56.605945  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:59.134704  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:59.144988  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:59.145094  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:59.170826  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:59.170846  319301 cri.go:96] found id: ""
	I1227 20:10:59.170859  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:59.170916  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:59.174542  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:59.174618  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:59.204712  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:59.204734  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:59.204738  319301 cri.go:96] found id: ""
	I1227 20:10:59.204746  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:59.204800  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:59.208625  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:59.212119  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:59.212200  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:59.241075  319301 cri.go:96] found id: ""
	I1227 20:10:59.241150  319301 logs.go:282] 0 containers: []
	W1227 20:10:59.241174  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:59.241195  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:59.241312  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:59.277168  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:59.277252  319301 cri.go:96] found id: ""
	I1227 20:10:59.277274  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:59.277366  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:59.281934  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:59.282029  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:59.307601  319301 cri.go:96] found id: ""
	I1227 20:10:59.307627  319301 logs.go:282] 0 containers: []
	W1227 20:10:59.307636  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:59.307643  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:59.307704  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:59.341899  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:59.341923  319301 cri.go:96] found id: ""
	I1227 20:10:59.341931  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:10:59.341999  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:59.345734  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:59.345844  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:59.371593  319301 cri.go:96] found id: ""
	I1227 20:10:59.371661  319301 logs.go:282] 0 containers: []
	W1227 20:10:59.371683  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:59.371716  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:59.371755  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:59.464618  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:59.464654  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:59.483758  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:59.483793  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:59.555654  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:59.546856    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.547308    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.548491    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.548938    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.550344    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:59.546856    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.547308    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.548491    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.548938    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.550344    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:59.555678  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:59.555696  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:59.583971  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:59.584004  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:59.635084  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:59.635118  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:59.662345  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:59.662375  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:59.726915  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:59.726950  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:59.754060  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:59.754094  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:59.836493  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:59.836534  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:02.376222  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:02.386794  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:02.386868  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:02.419031  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:02.419054  319301 cri.go:96] found id: ""
	I1227 20:11:02.419062  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:02.419118  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:02.423033  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:02.423106  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:02.448867  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:02.448891  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:02.448896  319301 cri.go:96] found id: ""
	I1227 20:11:02.448903  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:02.448957  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:02.452561  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:02.455963  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:02.456070  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:02.484254  319301 cri.go:96] found id: ""
	I1227 20:11:02.484281  319301 logs.go:282] 0 containers: []
	W1227 20:11:02.484290  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:02.484297  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:02.484357  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:02.511483  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:02.511506  319301 cri.go:96] found id: ""
	I1227 20:11:02.511515  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:02.511580  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:02.515291  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:02.515364  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:02.542839  319301 cri.go:96] found id: ""
	I1227 20:11:02.542866  319301 logs.go:282] 0 containers: []
	W1227 20:11:02.542886  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:02.542894  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:02.543025  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:02.576471  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:02.576505  319301 cri.go:96] found id: ""
	I1227 20:11:02.576519  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:02.576578  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:02.580126  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:02.580205  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:02.610225  319301 cri.go:96] found id: ""
	I1227 20:11:02.610252  319301 logs.go:282] 0 containers: []
	W1227 20:11:02.610261  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:02.610275  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:02.610316  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:02.640738  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:02.640766  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:02.688087  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:02.688120  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:02.714149  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:02.714175  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:02.743134  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:02.743161  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:02.822169  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:02.822206  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:02.894561  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:02.894595  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:02.936069  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:02.936096  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:03.036539  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:03.036573  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:03.054449  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:03.054480  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:03.132045  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:03.124246    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.125028    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.126504    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.127054    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.128486    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:03.124246    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.125028    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.126504    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.127054    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.128486    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:05.633596  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:05.644441  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:05.644564  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:05.671495  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:05.671520  319301 cri.go:96] found id: ""
	I1227 20:11:05.671528  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:05.671603  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:05.675058  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:05.675148  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:05.699421  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:05.699443  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:05.699448  319301 cri.go:96] found id: ""
	I1227 20:11:05.699456  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:05.699512  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:05.703223  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:05.706661  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:05.706747  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:05.731295  319301 cri.go:96] found id: ""
	I1227 20:11:05.731319  319301 logs.go:282] 0 containers: []
	W1227 20:11:05.731328  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:05.731334  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:05.731409  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:05.758394  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:05.758427  319301 cri.go:96] found id: ""
	I1227 20:11:05.758435  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:05.758500  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:05.762213  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:05.762304  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:05.788439  319301 cri.go:96] found id: ""
	I1227 20:11:05.788465  319301 logs.go:282] 0 containers: []
	W1227 20:11:05.788473  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:05.788480  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:05.788546  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:05.814115  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:05.814137  319301 cri.go:96] found id: ""
	I1227 20:11:05.814145  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:05.814199  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:05.817823  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:05.817893  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:05.844939  319301 cri.go:96] found id: ""
	I1227 20:11:05.844963  319301 logs.go:282] 0 containers: []
	W1227 20:11:05.844973  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:05.844988  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:05.845002  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:05.863023  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:05.863054  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:05.932754  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:05.924777    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.925338    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.926988    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.927561    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.928952    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:05.924777    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.925338    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.926988    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.927561    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.928952    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:05.932785  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:05.932802  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:05.960574  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:05.960604  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:06.004048  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:06.004082  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:06.055406  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:06.055441  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:06.082613  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:06.082643  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:06.115617  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:06.115646  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:06.149699  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:06.149729  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:06.250917  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:06.250950  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:08.830917  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:08.841316  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:08.841404  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:08.871386  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:08.871407  319301 cri.go:96] found id: ""
	I1227 20:11:08.871415  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:08.871483  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:08.875249  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:08.875334  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:08.905155  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:08.905178  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:08.905182  319301 cri.go:96] found id: ""
	I1227 20:11:08.905189  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:08.905256  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:08.909157  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:08.912623  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:08.912696  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:08.940125  319301 cri.go:96] found id: ""
	I1227 20:11:08.940151  319301 logs.go:282] 0 containers: []
	W1227 20:11:08.940161  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:08.940168  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:08.940228  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:08.979078  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:08.979099  319301 cri.go:96] found id: ""
	I1227 20:11:08.979115  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:08.979172  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:08.982993  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:08.983079  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:09.010456  319301 cri.go:96] found id: ""
	I1227 20:11:09.010482  319301 logs.go:282] 0 containers: []
	W1227 20:11:09.010491  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:09.010498  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:09.010559  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:09.046193  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:09.046226  319301 cri.go:96] found id: ""
	I1227 20:11:09.046235  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:09.046293  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:09.050361  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:09.050429  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:09.076865  319301 cri.go:96] found id: ""
	I1227 20:11:09.076892  319301 logs.go:282] 0 containers: []
	W1227 20:11:09.076901  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:09.076917  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:09.076929  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:09.103766  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:09.103793  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:09.121384  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:09.121412  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:09.190959  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:09.182712    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.183470    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.185037    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.185570    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.187248    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:09.182712    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.183470    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.185037    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.185570    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.187248    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:09.191026  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:09.191058  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:09.238609  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:09.238648  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:09.332804  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:09.332844  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:09.374845  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:09.374874  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:09.475731  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:09.475770  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:09.505046  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:09.505075  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:09.550742  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:09.550779  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:12.077490  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:12.089114  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:12.089187  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:12.117965  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:12.117987  319301 cri.go:96] found id: ""
	I1227 20:11:12.117995  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:12.118048  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:12.121654  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:12.121727  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:12.150616  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:12.150645  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:12.150650  319301 cri.go:96] found id: ""
	I1227 20:11:12.150658  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:12.150714  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:12.154526  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:12.157975  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:12.158059  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:12.188379  319301 cri.go:96] found id: ""
	I1227 20:11:12.188406  319301 logs.go:282] 0 containers: []
	W1227 20:11:12.188415  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:12.188421  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:12.188479  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:12.214099  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:12.214125  319301 cri.go:96] found id: ""
	I1227 20:11:12.214134  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:12.214187  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:12.217805  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:12.217871  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:12.244974  319301 cri.go:96] found id: ""
	I1227 20:11:12.244999  319301 logs.go:282] 0 containers: []
	W1227 20:11:12.245008  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:12.245015  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:12.245071  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:12.281031  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:12.281071  319301 cri.go:96] found id: ""
	I1227 20:11:12.281079  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:12.281146  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:12.284926  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:12.285004  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:12.311055  319301 cri.go:96] found id: ""
	I1227 20:11:12.311079  319301 logs.go:282] 0 containers: []
	W1227 20:11:12.311088  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:12.311101  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:12.311113  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:12.330032  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:12.330065  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:12.359973  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:12.360000  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:12.405129  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:12.405163  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:12.460783  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:12.460817  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:12.488201  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:12.488230  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:12.565465  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:12.565502  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:12.662969  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:12.663007  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:12.735836  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:12.727495    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.728366    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.730010    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.730324    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.731834    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:12.727495    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.728366    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.730010    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.730324    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.731834    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:12.735859  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:12.735872  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:12.763143  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:12.763168  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:15.305823  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:15.318015  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:15.318113  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:15.347994  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:15.348017  319301 cri.go:96] found id: ""
	I1227 20:11:15.348026  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:15.348089  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:15.351955  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:15.352056  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:15.378004  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:15.378026  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:15.378031  319301 cri.go:96] found id: ""
	I1227 20:11:15.378038  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:15.378091  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:15.381599  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:15.384824  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:15.384889  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:15.409597  319301 cri.go:96] found id: ""
	I1227 20:11:15.409673  319301 logs.go:282] 0 containers: []
	W1227 20:11:15.409695  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:15.409716  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:15.409805  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:15.436026  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:15.436091  319301 cri.go:96] found id: ""
	I1227 20:11:15.436114  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:15.436205  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:15.439709  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:15.439776  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:15.472950  319301 cri.go:96] found id: ""
	I1227 20:11:15.472974  319301 logs.go:282] 0 containers: []
	W1227 20:11:15.472983  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:15.472990  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:15.473047  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:15.503060  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:15.503083  319301 cri.go:96] found id: ""
	I1227 20:11:15.503092  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:15.503166  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:15.506772  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:15.506841  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:15.531805  319301 cri.go:96] found id: ""
	I1227 20:11:15.531828  319301 logs.go:282] 0 containers: []
	W1227 20:11:15.531837  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:15.531849  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:15.531861  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:15.557217  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:15.557253  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:15.583522  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:15.583550  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:15.646957  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:15.646994  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:15.677573  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:15.677601  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:15.763080  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:15.763117  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:15.795445  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:15.795473  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:15.895027  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:15.895063  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:15.914036  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:15.914065  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:15.990029  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:15.981434    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.982226    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.983747    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.984333    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.986074    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:15.981434    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.982226    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.983747    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.984333    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.986074    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:15.990048  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:15.990061  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:18.535347  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:18.545638  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:18.545712  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:18.573096  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:18.573125  319301 cri.go:96] found id: ""
	I1227 20:11:18.573135  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:18.573190  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:18.577413  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:18.577512  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:18.604633  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:18.604657  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:18.604662  319301 cri.go:96] found id: ""
	I1227 20:11:18.604670  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:18.604724  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:18.610098  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:18.613744  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:18.613821  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:18.645090  319301 cri.go:96] found id: ""
	I1227 20:11:18.645116  319301 logs.go:282] 0 containers: []
	W1227 20:11:18.645126  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:18.645132  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:18.645191  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:18.671681  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:18.671705  319301 cri.go:96] found id: ""
	I1227 20:11:18.671713  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:18.671768  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:18.675284  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:18.675356  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:18.701086  319301 cri.go:96] found id: ""
	I1227 20:11:18.701109  319301 logs.go:282] 0 containers: []
	W1227 20:11:18.701117  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:18.701123  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:18.701183  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:18.733157  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:18.733176  319301 cri.go:96] found id: ""
	I1227 20:11:18.733185  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:18.733237  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:18.736898  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:18.736978  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:18.761319  319301 cri.go:96] found id: ""
	I1227 20:11:18.761340  319301 logs.go:282] 0 containers: []
	W1227 20:11:18.761349  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:18.761362  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:18.761374  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:18.793077  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:18.793104  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:18.819425  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:18.819453  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:18.859846  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:18.859919  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:18.938269  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:18.938303  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:19.040817  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:19.040856  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:19.059170  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:19.059202  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:19.132074  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:19.121248    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.122916    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.123583    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.125207    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.125782    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:19.121248    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.122916    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.123583    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.125207    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.125782    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:19.132096  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:19.132111  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:19.179880  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:19.179916  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:19.223928  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:19.223963  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:21.759181  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:21.769762  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:21.769833  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:21.800302  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:21.800323  319301 cri.go:96] found id: ""
	I1227 20:11:21.800332  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:21.800395  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:21.804375  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:21.804458  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:21.830687  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:21.830711  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:21.830717  319301 cri.go:96] found id: ""
	I1227 20:11:21.830724  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:21.830779  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:21.834661  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:21.838097  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:21.838198  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:21.864157  319301 cri.go:96] found id: ""
	I1227 20:11:21.864183  319301 logs.go:282] 0 containers: []
	W1227 20:11:21.864193  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:21.864199  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:21.864292  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:21.890722  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:21.890747  319301 cri.go:96] found id: ""
	I1227 20:11:21.890756  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:21.890812  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:21.894377  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:21.894447  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:21.921902  319301 cri.go:96] found id: ""
	I1227 20:11:21.921932  319301 logs.go:282] 0 containers: []
	W1227 20:11:21.921941  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:21.921948  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:21.922013  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:21.948157  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:21.948181  319301 cri.go:96] found id: ""
	I1227 20:11:21.948190  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:21.948246  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:21.951860  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:21.951928  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:21.979147  319301 cri.go:96] found id: ""
	I1227 20:11:21.979171  319301 logs.go:282] 0 containers: []
	W1227 20:11:21.979181  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:21.979222  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:21.979242  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:22.077716  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:22.077768  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:22.161527  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:22.149113    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.149745    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.154386    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.154984    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.157780    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:22.149113    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.149745    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.154386    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.154984    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.157780    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:22.161553  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:22.161566  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:22.193359  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:22.193386  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:22.247574  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:22.247611  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:22.302993  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:22.303034  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:22.332035  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:22.332064  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:22.358225  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:22.358265  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:22.437089  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:22.437124  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:22.455750  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:22.455781  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:24.990837  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:25.001120  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:25.001190  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:25.040369  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:25.040388  319301 cri.go:96] found id: ""
	I1227 20:11:25.040396  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:25.040452  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:25.044321  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:25.044388  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:25.075240  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:25.075264  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:25.075268  319301 cri.go:96] found id: ""
	I1227 20:11:25.075276  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:25.075331  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:25.079221  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:25.083046  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:25.083117  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:25.111437  319301 cri.go:96] found id: ""
	I1227 20:11:25.111466  319301 logs.go:282] 0 containers: []
	W1227 20:11:25.111475  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:25.111482  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:25.111540  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:25.139474  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:25.139498  319301 cri.go:96] found id: ""
	I1227 20:11:25.139507  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:25.139572  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:25.143469  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:25.143540  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:25.177080  319301 cri.go:96] found id: ""
	I1227 20:11:25.177103  319301 logs.go:282] 0 containers: []
	W1227 20:11:25.177112  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:25.177119  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:25.177235  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:25.204123  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:25.204146  319301 cri.go:96] found id: ""
	I1227 20:11:25.204155  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:25.204238  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:25.207906  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:25.207978  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:25.233127  319301 cri.go:96] found id: ""
	I1227 20:11:25.233150  319301 logs.go:282] 0 containers: []
	W1227 20:11:25.233160  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:25.233175  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:25.233187  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:25.252764  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:25.252793  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:25.302886  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:25.302924  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:25.327231  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:25.327259  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:25.357720  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:25.357749  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:25.396486  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:25.396513  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:25.469872  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:25.461875    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.462332    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.464006    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.464571    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.466153    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:25.461875    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.462332    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.464006    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.464571    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.466153    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:25.469894  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:25.469907  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:25.498176  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:25.498204  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:25.547245  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:25.547279  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:25.629600  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:25.629639  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:28.230549  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:28.241564  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:28.241641  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:28.279080  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:28.279110  319301 cri.go:96] found id: ""
	I1227 20:11:28.279119  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:28.279185  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:28.284314  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:28.284405  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:28.316322  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:28.316389  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:28.316408  319301 cri.go:96] found id: ""
	I1227 20:11:28.316436  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:28.316522  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:28.320358  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:28.323910  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:28.324004  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:28.354101  319301 cri.go:96] found id: ""
	I1227 20:11:28.354172  319301 logs.go:282] 0 containers: []
	W1227 20:11:28.354195  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:28.354221  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:28.354308  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:28.381894  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:28.381933  319301 cri.go:96] found id: ""
	I1227 20:11:28.381944  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:28.382007  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:28.385565  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:28.385640  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:28.412036  319301 cri.go:96] found id: ""
	I1227 20:11:28.412063  319301 logs.go:282] 0 containers: []
	W1227 20:11:28.412072  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:28.412079  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:28.412136  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:28.437133  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:28.437154  319301 cri.go:96] found id: ""
	I1227 20:11:28.437162  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:28.437216  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:28.440922  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:28.441006  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:28.469470  319301 cri.go:96] found id: ""
	I1227 20:11:28.469495  319301 logs.go:282] 0 containers: []
	W1227 20:11:28.469505  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:28.469518  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:28.469531  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:28.512248  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:28.512281  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:28.538806  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:28.538834  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:28.615719  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:28.615756  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:28.651963  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:28.651992  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:28.753577  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:28.753616  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:28.770745  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:28.770778  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:28.798843  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:28.798878  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:28.867106  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:28.858730    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.859584    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.861103    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.861408    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.863356    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:28.858730    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.859584    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.861103    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.861408    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.863356    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:28.867124  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:28.867137  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:28.897868  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:28.897897  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:31.455673  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:31.466341  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:31.466412  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:31.494286  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:31.494305  319301 cri.go:96] found id: ""
	I1227 20:11:31.494312  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:31.494368  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:31.499152  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:31.499229  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:31.525626  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:31.525647  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:31.525651  319301 cri.go:96] found id: ""
	I1227 20:11:31.525666  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:31.525721  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:31.529291  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:31.532543  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:31.532612  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:31.558153  319301 cri.go:96] found id: ""
	I1227 20:11:31.558178  319301 logs.go:282] 0 containers: []
	W1227 20:11:31.558187  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:31.558193  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:31.558274  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:31.585024  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:31.585047  319301 cri.go:96] found id: ""
	I1227 20:11:31.585055  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:31.585109  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:31.588772  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:31.588841  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:31.615373  319301 cri.go:96] found id: ""
	I1227 20:11:31.615398  319301 logs.go:282] 0 containers: []
	W1227 20:11:31.615408  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:31.615414  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:31.615474  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:31.644548  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:31.644571  319301 cri.go:96] found id: ""
	I1227 20:11:31.644579  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:31.644634  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:31.648326  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:31.648396  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:31.674106  319301 cri.go:96] found id: ""
	I1227 20:11:31.674128  319301 logs.go:282] 0 containers: []
	W1227 20:11:31.674137  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:31.674152  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:31.674165  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:31.769885  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:31.769924  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:31.787798  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:31.787829  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:31.840240  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:31.840276  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:31.883880  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:31.883914  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:31.912615  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:31.912645  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:31.993762  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:31.993796  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:32.038771  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:32.038807  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:32.113504  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:32.105141    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.106007    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.106783    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.108406    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.108703    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:32.105141    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.106007    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.106783    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.108406    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.108703    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:32.113531  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:32.113545  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:32.145482  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:32.145508  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:34.675972  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:34.687181  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:34.687251  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:34.713741  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:34.713768  319301 cri.go:96] found id: ""
	I1227 20:11:34.713776  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:34.713837  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:34.717422  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:34.717525  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:34.742801  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:34.742824  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:34.742829  319301 cri.go:96] found id: ""
	I1227 20:11:34.742836  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:34.742890  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:34.746901  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:34.750347  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:34.750438  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:34.776122  319301 cri.go:96] found id: ""
	I1227 20:11:34.776156  319301 logs.go:282] 0 containers: []
	W1227 20:11:34.776165  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:34.776173  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:34.776241  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:34.801663  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:34.801687  319301 cri.go:96] found id: ""
	I1227 20:11:34.801696  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:34.801752  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:34.805521  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:34.805600  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:34.839033  319301 cri.go:96] found id: ""
	I1227 20:11:34.839059  319301 logs.go:282] 0 containers: []
	W1227 20:11:34.839068  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:34.839075  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:34.839164  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:34.875359  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:34.875380  319301 cri.go:96] found id: ""
	I1227 20:11:34.875389  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:34.875444  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:34.879108  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:34.879203  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:34.904808  319301 cri.go:96] found id: ""
	I1227 20:11:34.904831  319301 logs.go:282] 0 containers: []
	W1227 20:11:34.904839  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:34.904882  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:34.904902  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:35.001157  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:35.001197  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:35.036396  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:35.036492  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:35.100412  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:35.100452  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:35.130486  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:35.130514  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:35.212133  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:35.212170  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:35.261425  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:35.261489  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:35.279972  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:35.280002  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:35.344789  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:35.336875    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.337423    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.338974    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.339514    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.340959    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:35.336875    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.337423    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.338974    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.339514    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.340959    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:35.344811  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:35.344826  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:35.388398  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:35.388438  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:37.916139  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:37.926579  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:37.926656  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:37.957965  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:37.957990  319301 cri.go:96] found id: ""
	I1227 20:11:37.958011  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:37.958064  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:37.961819  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:37.961939  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:37.990732  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:37.990756  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:37.990763  319301 cri.go:96] found id: ""
	I1227 20:11:37.990774  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:37.990832  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:37.994865  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:37.998563  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:37.998657  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:38.029180  319301 cri.go:96] found id: ""
	I1227 20:11:38.029206  319301 logs.go:282] 0 containers: []
	W1227 20:11:38.029228  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:38.029235  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:38.029302  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:38.058262  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:38.058287  319301 cri.go:96] found id: ""
	I1227 20:11:38.058295  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:38.058390  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:38.062798  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:38.062895  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:38.093594  319301 cri.go:96] found id: ""
	I1227 20:11:38.093630  319301 logs.go:282] 0 containers: []
	W1227 20:11:38.093641  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:38.093647  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:38.093723  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:38.122677  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:38.122700  319301 cri.go:96] found id: ""
	I1227 20:11:38.122710  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:38.122784  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:38.126481  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:38.126556  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:38.152399  319301 cri.go:96] found id: ""
	I1227 20:11:38.152425  319301 logs.go:282] 0 containers: []
	W1227 20:11:38.152434  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:38.152447  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:38.152459  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:38.169834  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:38.169865  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:38.236553  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:38.228832    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.229398    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.230976    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.231455    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.232939    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:38.228832    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.229398    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.230976    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.231455    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.232939    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:38.236574  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:38.236587  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:38.283907  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:38.283942  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:38.327559  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:38.327595  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:38.354915  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:38.354944  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:38.385535  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:38.385567  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:38.482920  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:38.482955  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:38.513709  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:38.513737  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:38.541063  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:38.541092  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:41.120061  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:41.130482  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:41.130560  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:41.157933  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:41.157995  319301 cri.go:96] found id: ""
	I1227 20:11:41.158011  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:41.158068  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:41.161515  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:41.161587  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:41.186761  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:41.186784  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:41.186789  319301 cri.go:96] found id: ""
	I1227 20:11:41.186796  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:41.186853  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:41.190548  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:41.194929  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:41.195019  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:41.225573  319301 cri.go:96] found id: ""
	I1227 20:11:41.225600  319301 logs.go:282] 0 containers: []
	W1227 20:11:41.225609  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:41.225615  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:41.225678  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:41.255736  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:41.255810  319301 cri.go:96] found id: ""
	I1227 20:11:41.255833  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:41.255924  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:41.259619  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:41.259730  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:41.293635  319301 cri.go:96] found id: ""
	I1227 20:11:41.293658  319301 logs.go:282] 0 containers: []
	W1227 20:11:41.293667  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:41.293674  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:41.293736  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:41.325226  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:41.325248  319301 cri.go:96] found id: ""
	I1227 20:11:41.325257  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:41.325311  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:41.328850  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:41.328919  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:41.356320  319301 cri.go:96] found id: ""
	I1227 20:11:41.356345  319301 logs.go:282] 0 containers: []
	W1227 20:11:41.356354  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:41.356370  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:41.356383  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:41.384750  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:41.384777  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:41.438279  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:41.438315  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:41.496771  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:41.496814  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:41.525343  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:41.525373  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:41.558207  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:41.558235  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:41.657075  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:41.657112  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:41.689798  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:41.689828  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:41.769585  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:41.769620  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:41.787874  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:41.787906  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:41.852555  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:41.844441    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.845015    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.846678    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.847233    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.849010    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:41.844441    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.845015    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.846678    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.847233    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.849010    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:44.353586  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:44.364496  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:44.364591  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:44.396750  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:44.396823  319301 cri.go:96] found id: ""
	I1227 20:11:44.396848  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:44.396920  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:44.400610  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:44.400687  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:44.428171  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:44.428250  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:44.428271  319301 cri.go:96] found id: ""
	I1227 20:11:44.428296  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:44.428411  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:44.432219  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:44.435828  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:44.435901  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:44.464904  319301 cri.go:96] found id: ""
	I1227 20:11:44.464931  319301 logs.go:282] 0 containers: []
	W1227 20:11:44.464953  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:44.464960  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:44.465019  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:44.494508  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:44.494537  319301 cri.go:96] found id: ""
	I1227 20:11:44.494546  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:44.494602  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:44.498485  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:44.498588  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:44.526221  319301 cri.go:96] found id: ""
	I1227 20:11:44.526249  319301 logs.go:282] 0 containers: []
	W1227 20:11:44.526258  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:44.526264  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:44.526337  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:44.557553  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:44.557629  319301 cri.go:96] found id: ""
	I1227 20:11:44.557644  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:44.557713  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:44.561435  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:44.561578  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:44.588202  319301 cri.go:96] found id: ""
	I1227 20:11:44.588227  319301 logs.go:282] 0 containers: []
	W1227 20:11:44.588236  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:44.588250  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:44.588281  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:44.636647  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:44.636688  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:44.715003  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:44.715041  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:44.746461  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:44.746488  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:44.840354  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:44.840392  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:44.910107  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:44.902375    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.902947    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.904566    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.905162    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.906700    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:44.902375    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.902947    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.904566    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.905162    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.906700    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:44.910127  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:44.910139  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:44.958123  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:44.958155  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:44.988455  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:44.988486  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:45.017637  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:45.017669  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:45.068015  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:45.068047  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:47.639577  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:47.650807  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:47.650879  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:47.680709  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:47.680780  319301 cri.go:96] found id: ""
	I1227 20:11:47.680801  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:47.680886  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:47.684862  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:47.684933  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:47.711503  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:47.711527  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:47.711533  319301 cri.go:96] found id: ""
	I1227 20:11:47.711541  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:47.711597  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:47.715323  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:47.718860  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:47.718939  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:47.745091  319301 cri.go:96] found id: ""
	I1227 20:11:47.745118  319301 logs.go:282] 0 containers: []
	W1227 20:11:47.745128  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:47.745134  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:47.745190  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:47.774661  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:47.774683  319301 cri.go:96] found id: ""
	I1227 20:11:47.774691  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:47.774751  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:47.778781  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:47.778879  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:47.805242  319301 cri.go:96] found id: ""
	I1227 20:11:47.805268  319301 logs.go:282] 0 containers: []
	W1227 20:11:47.805278  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:47.805284  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:47.805350  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:47.833172  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:47.833240  319301 cri.go:96] found id: ""
	I1227 20:11:47.833262  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:47.833351  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:47.837087  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:47.837159  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:47.865275  319301 cri.go:96] found id: ""
	I1227 20:11:47.865353  319301 logs.go:282] 0 containers: []
	W1227 20:11:47.865380  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:47.865432  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:47.865505  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:47.944986  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:47.945022  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:47.980482  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:47.980511  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:47.999608  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:47.999639  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:48.076328  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:48.067348    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.068343    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.070039    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.070763    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.072273    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:48.067348    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.068343    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.070039    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.070763    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.072273    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:48.076352  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:48.076365  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:48.102940  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:48.102968  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:48.195452  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:48.195490  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:48.225373  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:48.225402  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:48.273525  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:48.273604  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:48.325768  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:48.325805  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:50.855952  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:50.867387  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:50.867456  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:50.897533  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:50.897556  319301 cri.go:96] found id: ""
	I1227 20:11:50.897565  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:50.897617  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:50.900982  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:50.901048  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:50.935428  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:50.935450  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:50.935455  319301 cri.go:96] found id: ""
	I1227 20:11:50.935468  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:50.935521  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:50.939266  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:50.943149  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:50.943266  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:50.974808  319301 cri.go:96] found id: ""
	I1227 20:11:50.974842  319301 logs.go:282] 0 containers: []
	W1227 20:11:50.974852  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:50.974859  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:50.974928  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:51.001867  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:51.001890  319301 cri.go:96] found id: ""
	I1227 20:11:51.001899  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:51.001957  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:51.005758  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:51.005831  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:51.035904  319301 cri.go:96] found id: ""
	I1227 20:11:51.035979  319301 logs.go:282] 0 containers: []
	W1227 20:11:51.036002  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:51.036026  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:51.036134  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:51.064190  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:51.064213  319301 cri.go:96] found id: ""
	I1227 20:11:51.064222  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:51.064277  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:51.068971  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:51.069043  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:51.098066  319301 cri.go:96] found id: ""
	I1227 20:11:51.098092  319301 logs.go:282] 0 containers: []
	W1227 20:11:51.098101  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:51.098116  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:51.098128  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:51.193690  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:51.193731  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:51.236544  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:51.236578  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:51.275361  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:51.275397  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:51.309801  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:51.309827  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:51.327683  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:51.327711  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:51.401236  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:51.392227    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.393287    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.394285    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.395538    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.396222    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:51.392227    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.393287    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.394285    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.395538    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.396222    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:51.401259  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:51.401273  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:51.429955  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:51.429985  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:51.492625  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:51.492662  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:51.518481  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:51.518512  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:54.100065  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:54.111435  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:54.111510  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:54.142927  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:54.142956  319301 cri.go:96] found id: ""
	I1227 20:11:54.142975  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:54.143064  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:54.147093  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:54.147233  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:54.173813  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:54.173832  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:54.173837  319301 cri.go:96] found id: ""
	I1227 20:11:54.173844  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:54.173903  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:54.177570  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:54.181008  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:54.181079  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:54.206624  319301 cri.go:96] found id: ""
	I1227 20:11:54.206648  319301 logs.go:282] 0 containers: []
	W1227 20:11:54.206658  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:54.206664  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:54.206720  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:54.232185  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:54.232208  319301 cri.go:96] found id: ""
	I1227 20:11:54.232218  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:54.232281  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:54.236968  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:54.237047  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:54.266150  319301 cri.go:96] found id: ""
	I1227 20:11:54.266172  319301 logs.go:282] 0 containers: []
	W1227 20:11:54.266181  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:54.266187  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:54.266254  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:54.294800  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:54.294820  319301 cri.go:96] found id: ""
	I1227 20:11:54.294829  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:54.294880  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:54.298462  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:54.298526  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:54.323550  319301 cri.go:96] found id: ""
	I1227 20:11:54.323573  319301 logs.go:282] 0 containers: []
	W1227 20:11:54.323582  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:54.323599  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:54.323610  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:54.352757  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:54.352783  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:54.383438  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:54.383464  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:54.473431  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:54.473470  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:54.544121  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:54.535951    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.536753    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.538081    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.538522    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.540194    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:54.535951    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.536753    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.538081    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.538522    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.540194    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:54.544146  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:54.544162  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:54.587199  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:54.587231  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:54.625648  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:54.625675  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:54.708479  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:54.708513  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:54.727026  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:54.727055  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:54.758081  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:54.758110  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:57.311000  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:57.321234  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:57.321311  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:57.349011  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:57.349030  319301 cri.go:96] found id: ""
	I1227 20:11:57.349038  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:57.349091  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:57.353198  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:57.353266  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:57.378464  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:57.378489  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:57.378494  319301 cri.go:96] found id: ""
	I1227 20:11:57.378502  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:57.378564  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:57.382492  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:57.385894  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:57.385975  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:57.410564  319301 cri.go:96] found id: ""
	I1227 20:11:57.410629  319301 logs.go:282] 0 containers: []
	W1227 20:11:57.410642  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:57.410650  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:57.410708  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:57.437790  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:57.437814  319301 cri.go:96] found id: ""
	I1227 20:11:57.437823  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:57.437881  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:57.441526  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:57.441645  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:57.467252  319301 cri.go:96] found id: ""
	I1227 20:11:57.467319  319301 logs.go:282] 0 containers: []
	W1227 20:11:57.467334  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:57.467342  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:57.467406  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:57.495037  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:57.495058  319301 cri.go:96] found id: ""
	I1227 20:11:57.495067  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:57.495123  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:57.498778  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:57.498878  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:57.528106  319301 cri.go:96] found id: ""
	I1227 20:11:57.528133  319301 logs.go:282] 0 containers: []
	W1227 20:11:57.528142  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:57.528155  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:57.528168  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:57.619388  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:57.619424  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:57.650304  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:57.650332  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:57.699631  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:57.699667  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:57.743221  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:57.743254  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:57.769136  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:57.769164  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:57.786763  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:57.786790  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:57.859691  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:57.849669    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.850063    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.853911    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.854484    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.856001    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:57.849669    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.850063    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.853911    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.854484    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.856001    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:57.859713  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:57.859728  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:57.884558  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:57.884586  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:57.961115  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:57.961152  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:00.497672  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:00.510050  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:00.510129  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:00.544933  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:00.544956  319301 cri.go:96] found id: ""
	I1227 20:12:00.544965  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:00.545025  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:00.549158  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:00.549233  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:00.576607  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:00.576630  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:00.576636  319301 cri.go:96] found id: ""
	I1227 20:12:00.576643  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:00.576700  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:00.580716  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:00.584708  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:00.584783  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:00.623469  319301 cri.go:96] found id: ""
	I1227 20:12:00.623492  319301 logs.go:282] 0 containers: []
	W1227 20:12:00.623501  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:00.623508  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:00.623567  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:00.650388  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:00.650460  319301 cri.go:96] found id: ""
	I1227 20:12:00.650476  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:00.650537  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:00.654531  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:00.654613  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:00.685179  319301 cri.go:96] found id: ""
	I1227 20:12:00.685206  319301 logs.go:282] 0 containers: []
	W1227 20:12:00.685215  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:00.685222  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:00.685283  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:00.716017  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:00.716036  319301 cri.go:96] found id: ""
	I1227 20:12:00.716045  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:00.716102  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:00.720897  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:00.720967  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:00.752084  319301 cri.go:96] found id: ""
	I1227 20:12:00.752108  319301 logs.go:282] 0 containers: []
	W1227 20:12:00.752118  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:00.752133  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:00.752145  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:00.779162  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:00.779191  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:00.828229  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:00.828268  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:00.854975  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:00.855005  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:00.883576  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:00.883606  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:00.965151  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:00.965192  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:01.067209  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:01.067248  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:01.085199  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:01.085232  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:01.155625  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:01.146876    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.148053    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.148721    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.149832    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.150397    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:01.146876    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.148053    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.148721    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.149832    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.150397    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:01.155647  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:01.155660  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:01.206940  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:01.206978  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:03.749679  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:03.760472  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:03.760548  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:03.788993  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:03.789016  319301 cri.go:96] found id: ""
	I1227 20:12:03.789024  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:03.789079  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:03.792725  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:03.792798  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:03.817942  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:03.817964  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:03.817969  319301 cri.go:96] found id: ""
	I1227 20:12:03.817975  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:03.818031  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:03.821717  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:03.825168  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:03.825254  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:03.851505  319301 cri.go:96] found id: ""
	I1227 20:12:03.851527  319301 logs.go:282] 0 containers: []
	W1227 20:12:03.851536  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:03.851542  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:03.851606  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:03.878946  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:03.878971  319301 cri.go:96] found id: ""
	I1227 20:12:03.878980  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:03.879043  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:03.883057  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:03.883130  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:03.911906  319301 cri.go:96] found id: ""
	I1227 20:12:03.911933  319301 logs.go:282] 0 containers: []
	W1227 20:12:03.911943  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:03.911950  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:03.912009  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:03.942160  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:03.942183  319301 cri.go:96] found id: ""
	I1227 20:12:03.942192  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:03.942252  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:03.946415  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:03.946666  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:03.979149  319301 cri.go:96] found id: ""
	I1227 20:12:03.979174  319301 logs.go:282] 0 containers: []
	W1227 20:12:03.979182  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:03.979198  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:03.979210  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:04.005778  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:04.005811  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:04.088126  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:04.088160  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:04.119438  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:04.119469  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:04.190373  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:04.181899    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.182747    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.184416    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.184965    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.186575    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:04.181899    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.182747    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.184416    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.184965    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.186575    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:04.190394  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:04.190407  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:04.220233  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:04.220259  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:04.245645  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:04.245671  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:04.345961  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:04.345994  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:04.365659  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:04.365694  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:04.417757  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:04.417791  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:06.964717  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:06.979395  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:06.979502  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:07.006920  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:07.006954  319301 cri.go:96] found id: ""
	I1227 20:12:07.006964  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:07.007030  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:07.012095  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:07.012233  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:07.041413  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:07.041494  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:07.041512  319301 cri.go:96] found id: ""
	I1227 20:12:07.041520  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:07.041598  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:07.045354  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:07.049177  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:07.049259  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:07.083301  319301 cri.go:96] found id: ""
	I1227 20:12:07.083329  319301 logs.go:282] 0 containers: []
	W1227 20:12:07.083338  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:07.083344  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:07.083421  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:07.115313  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:07.115338  319301 cri.go:96] found id: ""
	I1227 20:12:07.115347  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:07.115417  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:07.119201  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:07.119288  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:07.146102  319301 cri.go:96] found id: ""
	I1227 20:12:07.146131  319301 logs.go:282] 0 containers: []
	W1227 20:12:07.146140  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:07.146147  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:07.146208  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:07.172141  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:07.172172  319301 cri.go:96] found id: ""
	I1227 20:12:07.172180  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:07.172247  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:07.175941  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:07.176014  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:07.201635  319301 cri.go:96] found id: ""
	I1227 20:12:07.201661  319301 logs.go:282] 0 containers: []
	W1227 20:12:07.201682  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:07.201699  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:07.201711  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:07.267041  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:07.258167    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.258717    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.260273    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.260745    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.262196    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:07.258167    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.258717    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.260273    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.260745    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.262196    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:07.267062  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:07.267076  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:07.299653  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:07.299681  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:07.379741  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:07.379776  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:07.478201  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:07.478238  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:07.496143  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:07.496172  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:07.524943  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:07.524973  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:07.588841  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:07.588883  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:07.639348  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:07.639391  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:07.671575  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:07.671608  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:10.217505  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:10.228493  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:10.228562  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:10.262225  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:10.262248  319301 cri.go:96] found id: ""
	I1227 20:12:10.262256  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:10.262312  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:10.267062  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:10.267197  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:10.296434  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:10.296459  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:10.296464  319301 cri.go:96] found id: ""
	I1227 20:12:10.296472  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:10.296529  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:10.300310  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:10.304957  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:10.305022  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:10.330532  319301 cri.go:96] found id: ""
	I1227 20:12:10.330560  319301 logs.go:282] 0 containers: []
	W1227 20:12:10.330570  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:10.330584  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:10.330646  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:10.361300  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:10.361324  319301 cri.go:96] found id: ""
	I1227 20:12:10.361332  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:10.361394  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:10.365025  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:10.365095  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:10.391129  319301 cri.go:96] found id: ""
	I1227 20:12:10.391150  319301 logs.go:282] 0 containers: []
	W1227 20:12:10.391159  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:10.391165  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:10.391228  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:10.427446  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:10.427467  319301 cri.go:96] found id: ""
	I1227 20:12:10.427475  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:10.427530  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:10.431147  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:10.431236  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:10.457621  319301 cri.go:96] found id: ""
	I1227 20:12:10.457645  319301 logs.go:282] 0 containers: []
	W1227 20:12:10.457653  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:10.457669  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:10.457680  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:10.497801  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:10.497832  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:10.533576  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:10.533606  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:10.563063  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:10.563092  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:10.595636  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:10.595663  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:10.707654  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:10.707734  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:10.727626  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:10.727752  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:10.859705  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:10.846588    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.847467    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.853805    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.854122    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.855621    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:10.846588    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.847467    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.853805    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.854122    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.855621    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:10.859774  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:10.859801  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:10.958101  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:10.958183  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:11.020263  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:11.020358  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:13.639948  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:13.650732  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:13.650797  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:13.676632  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:13.676651  319301 cri.go:96] found id: ""
	I1227 20:12:13.676658  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:13.676710  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:13.680432  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:13.680542  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:13.711606  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:13.711625  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:13.711630  319301 cri.go:96] found id: ""
	I1227 20:12:13.711637  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:13.711691  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:13.715265  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:13.718775  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:13.718931  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:13.746245  319301 cri.go:96] found id: ""
	I1227 20:12:13.746275  319301 logs.go:282] 0 containers: []
	W1227 20:12:13.746291  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:13.746298  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:13.746374  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:13.779388  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:13.779409  319301 cri.go:96] found id: ""
	I1227 20:12:13.779418  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:13.779504  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:13.783612  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:13.783685  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:13.808842  319301 cri.go:96] found id: ""
	I1227 20:12:13.808863  319301 logs.go:282] 0 containers: []
	W1227 20:12:13.808872  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:13.808878  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:13.808934  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:13.835153  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:13.835174  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:13.835179  319301 cri.go:96] found id: ""
	I1227 20:12:13.835187  319301 logs.go:282] 2 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:13.835249  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:13.839009  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:13.842805  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:13.842881  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:13.872544  319301 cri.go:96] found id: ""
	I1227 20:12:13.872570  319301 logs.go:282] 0 containers: []
	W1227 20:12:13.872579  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:13.872587  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:13.872599  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:13.898550  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:13.898578  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:13.924170  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:13.924197  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:14.003535  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:14.003571  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:14.105189  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:14.105228  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:14.176586  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:14.168398    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.169127    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.170691    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.171292    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.172935    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:14.168398    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.169127    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.170691    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.171292    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.172935    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:14.176608  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:14.176622  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:14.204979  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:14.205007  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:14.246862  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:14.246911  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:14.282199  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:14.282225  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:14.315428  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:14.315459  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:14.334814  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:14.334848  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:16.885569  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:16.896097  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:16.896162  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:16.925765  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:16.925785  319301 cri.go:96] found id: ""
	I1227 20:12:16.925794  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:16.925849  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:16.929283  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:16.929349  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:16.954491  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:16.954515  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:16.954520  319301 cri.go:96] found id: ""
	I1227 20:12:16.954528  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:16.954586  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:16.958221  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:16.961382  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:16.961573  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:16.994836  319301 cri.go:96] found id: ""
	I1227 20:12:16.994860  319301 logs.go:282] 0 containers: []
	W1227 20:12:16.994868  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:16.994874  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:16.994933  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:17.021903  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:17.021926  319301 cri.go:96] found id: ""
	I1227 20:12:17.021934  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:17.022017  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:17.025998  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:17.026093  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:17.052024  319301 cri.go:96] found id: ""
	I1227 20:12:17.052049  319301 logs.go:282] 0 containers: []
	W1227 20:12:17.052058  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:17.052083  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:17.052163  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:17.078719  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:17.078740  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:17.078744  319301 cri.go:96] found id: ""
	I1227 20:12:17.078752  319301 logs.go:282] 2 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:17.078826  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:17.082470  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:17.086147  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:17.086220  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:17.116980  319301 cri.go:96] found id: ""
	I1227 20:12:17.117003  319301 logs.go:282] 0 containers: []
	W1227 20:12:17.117013  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:17.117022  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:17.117033  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:17.196379  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:17.196418  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:17.230926  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:17.230959  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:17.250661  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:17.250691  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:17.322817  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:17.314780    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.315442    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.317018    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.317535    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.319106    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:17.314780    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.315442    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.317018    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.317535    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.319106    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:17.322840  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:17.322856  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:17.351684  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:17.351711  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:17.399098  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:17.399132  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:17.490988  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:17.491023  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:17.556151  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:17.556187  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:17.582835  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:17.582871  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:17.613801  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:17.613837  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:20.145063  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:20.156515  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:20.156583  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:20.187608  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:20.187635  319301 cri.go:96] found id: ""
	I1227 20:12:20.187645  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:20.187707  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:20.192025  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:20.192105  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:20.224749  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:20.224774  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:20.224780  319301 cri.go:96] found id: ""
	I1227 20:12:20.224788  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:20.224847  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:20.229081  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:20.233080  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:20.233183  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:20.265194  319301 cri.go:96] found id: ""
	I1227 20:12:20.265217  319301 logs.go:282] 0 containers: []
	W1227 20:12:20.265226  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:20.265233  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:20.265290  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:20.294941  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:20.294965  319301 cri.go:96] found id: ""
	I1227 20:12:20.294974  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:20.295030  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:20.299194  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:20.299295  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:20.327103  319301 cri.go:96] found id: ""
	I1227 20:12:20.327127  319301 logs.go:282] 0 containers: []
	W1227 20:12:20.327136  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:20.327142  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:20.327225  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:20.355319  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:20.355340  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:20.355351  319301 cri.go:96] found id: ""
	I1227 20:12:20.355359  319301 logs.go:282] 2 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:20.355441  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:20.359302  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:20.362848  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:20.362949  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:20.393433  319301 cri.go:96] found id: ""
	I1227 20:12:20.393488  319301 logs.go:282] 0 containers: []
	W1227 20:12:20.393498  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:20.393527  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:20.393545  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:20.421493  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:20.421522  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:20.498925  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:20.498966  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:20.519854  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:20.519883  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:20.576881  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:20.576922  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:20.621620  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:20.621656  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:20.649613  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:20.649648  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:20.685860  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:20.685889  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:20.779036  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:20.779072  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:20.846477  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:20.838325    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.838829    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.840489    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.841069    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.842962    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:20.838325    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.838829    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.840489    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.841069    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.842962    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:20.846497  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:20.846511  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:20.876493  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:20.876523  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:23.407116  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:23.417842  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:23.417914  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:23.449077  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:23.449100  319301 cri.go:96] found id: ""
	I1227 20:12:23.449108  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:23.449162  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:23.452848  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:23.452918  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:23.481566  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:23.481589  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:23.481595  319301 cri.go:96] found id: ""
	I1227 20:12:23.481602  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:23.481661  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:23.485561  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:23.489363  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:23.489433  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:23.515690  319301 cri.go:96] found id: ""
	I1227 20:12:23.515717  319301 logs.go:282] 0 containers: []
	W1227 20:12:23.515727  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:23.515734  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:23.515796  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:23.542113  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:23.542134  319301 cri.go:96] found id: ""
	I1227 20:12:23.542144  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:23.542198  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:23.546461  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:23.546535  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:23.572051  319301 cri.go:96] found id: ""
	I1227 20:12:23.572080  319301 logs.go:282] 0 containers: []
	W1227 20:12:23.572090  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:23.572096  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:23.572154  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:23.598223  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:23.598246  319301 cri.go:96] found id: ""
	I1227 20:12:23.598254  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:23.598308  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:23.602471  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:23.602548  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:23.632139  319301 cri.go:96] found id: ""
	I1227 20:12:23.632162  319301 logs.go:282] 0 containers: []
	W1227 20:12:23.632171  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:23.632185  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:23.632198  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:23.728534  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:23.728573  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:23.746910  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:23.746937  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:23.790408  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:23.790450  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:23.816648  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:23.816683  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:23.844206  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:23.844234  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:23.922341  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:23.922381  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:23.990219  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:23.981959    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.982768    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.984359    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.984673    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.986151    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:23.981959    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.982768    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.984359    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.984673    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.986151    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:23.990238  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:23.990252  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:24.021769  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:24.021804  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:24.077552  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:24.077591  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:26.612708  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:26.623326  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:26.623428  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:26.653266  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:26.653289  319301 cri.go:96] found id: ""
	I1227 20:12:26.653298  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:26.653373  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:26.657260  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:26.657353  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:26.683071  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:26.683092  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:26.683098  319301 cri.go:96] found id: ""
	I1227 20:12:26.683105  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:26.683166  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:26.686901  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:26.690560  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:26.690649  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:26.718862  319301 cri.go:96] found id: ""
	I1227 20:12:26.718885  319301 logs.go:282] 0 containers: []
	W1227 20:12:26.718894  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:26.718900  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:26.718959  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:26.747552  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:26.747574  319301 cri.go:96] found id: ""
	I1227 20:12:26.747582  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:26.747637  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:26.751375  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:26.751452  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:26.777853  319301 cri.go:96] found id: ""
	I1227 20:12:26.777880  319301 logs.go:282] 0 containers: []
	W1227 20:12:26.777889  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:26.777895  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:26.777957  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:26.804445  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:26.804468  319301 cri.go:96] found id: ""
	I1227 20:12:26.804477  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:26.804535  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:26.808568  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:26.808691  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:26.836896  319301 cri.go:96] found id: ""
	I1227 20:12:26.836922  319301 logs.go:282] 0 containers: []
	W1227 20:12:26.836932  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:26.836945  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:26.836960  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:26.857005  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:26.857033  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:26.928707  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:26.920823    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.921472    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.923023    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.923492    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.925222    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:26.920823    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.921472    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.923023    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.923492    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.925222    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:26.928729  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:26.928742  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:26.956493  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:26.956522  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:26.986280  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:26.986306  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:27.076259  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:27.076295  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:27.172547  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:27.172582  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:27.230338  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:27.230374  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:27.276521  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:27.276554  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:27.308603  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:27.308630  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:29.841840  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:29.852151  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:29.852219  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:29.879885  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:29.879922  319301 cri.go:96] found id: ""
	I1227 20:12:29.879931  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:29.880028  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:29.883662  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:29.883731  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:29.912705  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:29.912727  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:29.912733  319301 cri.go:96] found id: ""
	I1227 20:12:29.912740  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:29.912795  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:29.916252  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:29.921161  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:29.921231  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:29.950824  319301 cri.go:96] found id: ""
	I1227 20:12:29.950846  319301 logs.go:282] 0 containers: []
	W1227 20:12:29.950855  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:29.950862  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:29.950917  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:29.986337  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:29.986357  319301 cri.go:96] found id: ""
	I1227 20:12:29.986365  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:29.986420  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:29.990557  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:29.990644  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:30.034984  319301 cri.go:96] found id: ""
	I1227 20:12:30.035016  319301 logs.go:282] 0 containers: []
	W1227 20:12:30.035027  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:30.035034  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:30.035109  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:30.071248  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:30.071274  319301 cri.go:96] found id: ""
	I1227 20:12:30.071284  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:30.071380  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:30.075947  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:30.076061  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:30.105680  319301 cri.go:96] found id: ""
	I1227 20:12:30.105705  319301 logs.go:282] 0 containers: []
	W1227 20:12:30.105715  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:30.105730  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:30.105748  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:30.135961  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:30.135994  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:30.216289  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:30.216331  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:30.255913  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:30.255946  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:30.355835  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:30.355870  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:30.429441  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:30.421794    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.422353    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.423860    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.424337    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.426060    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:30.421794    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.422353    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.423860    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.424337    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.426060    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:30.429483  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:30.429495  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:30.458949  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:30.458978  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:30.502640  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:30.502677  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:30.532992  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:30.533023  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:30.557835  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:30.557866  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:33.116429  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:33.127018  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:33.127132  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:33.153291  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:33.153316  319301 cri.go:96] found id: ""
	I1227 20:12:33.153324  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:33.153379  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:33.157166  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:33.157239  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:33.183179  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:33.183200  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:33.183205  319301 cri.go:96] found id: ""
	I1227 20:12:33.183213  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:33.183265  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:33.186752  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:33.190422  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:33.190494  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:33.220717  319301 cri.go:96] found id: ""
	I1227 20:12:33.220739  319301 logs.go:282] 0 containers: []
	W1227 20:12:33.220748  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:33.220754  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:33.220818  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:33.251060  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:33.251083  319301 cri.go:96] found id: ""
	I1227 20:12:33.251091  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:33.251145  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:33.254679  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:33.254748  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:33.286493  319301 cri.go:96] found id: ""
	I1227 20:12:33.286518  319301 logs.go:282] 0 containers: []
	W1227 20:12:33.286527  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:33.286533  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:33.286620  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:33.313587  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:33.313613  319301 cri.go:96] found id: ""
	I1227 20:12:33.313622  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:33.313680  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:33.317328  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:33.317408  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:33.343846  319301 cri.go:96] found id: ""
	I1227 20:12:33.343871  319301 logs.go:282] 0 containers: []
	W1227 20:12:33.343880  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:33.343893  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:33.343925  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:33.438565  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:33.438603  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:33.457675  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:33.457705  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:33.525788  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:33.517888    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.518628    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.520164    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.520718    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.522282    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:33.517888    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.518628    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.520164    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.520718    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.522282    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:33.525811  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:33.525825  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:33.552529  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:33.552556  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:33.580140  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:33.580172  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:33.641393  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:33.641499  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:33.693161  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:33.693199  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:33.724867  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:33.724893  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:33.805497  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:33.805537  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:36.337435  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:36.352136  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:36.352206  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:36.378464  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:36.378486  319301 cri.go:96] found id: ""
	I1227 20:12:36.378494  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:36.378548  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:36.382431  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:36.382500  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:36.408340  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:36.408362  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:36.408367  319301 cri.go:96] found id: ""
	I1227 20:12:36.408375  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:36.408430  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:36.411977  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:36.415450  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:36.415561  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:36.441750  319301 cri.go:96] found id: ""
	I1227 20:12:36.441773  319301 logs.go:282] 0 containers: []
	W1227 20:12:36.441781  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:36.441789  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:36.441849  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:36.469111  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:36.469133  319301 cri.go:96] found id: ""
	I1227 20:12:36.469141  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:36.469193  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:36.472982  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:36.473055  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:36.501345  319301 cri.go:96] found id: ""
	I1227 20:12:36.501368  319301 logs.go:282] 0 containers: []
	W1227 20:12:36.501378  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:36.501384  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:36.501477  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:36.527577  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:36.527600  319301 cri.go:96] found id: ""
	I1227 20:12:36.527608  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:36.527664  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:36.531477  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:36.531552  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:36.561054  319301 cri.go:96] found id: ""
	I1227 20:12:36.561130  319301 logs.go:282] 0 containers: []
	W1227 20:12:36.561154  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:36.561181  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:36.561217  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:36.589983  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:36.590014  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:36.669955  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:36.669994  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:36.768958  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:36.768994  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:36.787310  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:36.787336  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:36.856793  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:36.848163    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.849099    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.850911    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.851491    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.853132    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:36.848163    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.849099    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.850911    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.851491    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.853132    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:36.856819  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:36.856834  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:36.909328  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:36.909366  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:36.960708  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:36.960741  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:36.988799  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:36.988826  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:37.020389  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:37.020426  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:39.556036  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:39.567454  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:39.567523  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:39.597767  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:39.597789  319301 cri.go:96] found id: ""
	I1227 20:12:39.597797  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:39.597853  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:39.601347  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:39.601417  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:39.630309  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:39.630330  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:39.630335  319301 cri.go:96] found id: ""
	I1227 20:12:39.630343  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:39.630395  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:39.634109  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:39.637369  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:39.637474  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:39.664492  319301 cri.go:96] found id: ""
	I1227 20:12:39.664515  319301 logs.go:282] 0 containers: []
	W1227 20:12:39.664523  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:39.664536  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:39.664595  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:39.689554  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:39.689585  319301 cri.go:96] found id: ""
	I1227 20:12:39.689594  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:39.689648  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:39.693184  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:39.693251  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:39.719030  319301 cri.go:96] found id: ""
	I1227 20:12:39.719057  319301 logs.go:282] 0 containers: []
	W1227 20:12:39.719066  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:39.719073  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:39.719131  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:39.751945  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:39.751967  319301 cri.go:96] found id: ""
	I1227 20:12:39.751976  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:39.752058  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:39.755910  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:39.755984  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:39.787281  319301 cri.go:96] found id: ""
	I1227 20:12:39.787306  319301 logs.go:282] 0 containers: []
	W1227 20:12:39.787315  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:39.787329  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:39.787341  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:39.818112  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:39.818181  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:39.877195  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:39.877228  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:39.902875  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:39.902908  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:39.933383  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:39.933411  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:39.964696  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:39.964725  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:40.094427  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:40.094546  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:40.115127  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:40.115169  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:40.188369  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:40.178140    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.178935    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.180929    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.181956    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.182727    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:40.178140    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.178935    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.180929    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.181956    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.182727    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:40.188403  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:40.188417  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:40.248250  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:40.248293  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:42.832956  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:42.843630  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:42.843716  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:42.880632  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:42.880654  319301 cri.go:96] found id: ""
	I1227 20:12:42.880662  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:42.880716  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:42.884197  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:42.884283  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:42.912329  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:42.912351  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:42.912356  319301 cri.go:96] found id: ""
	I1227 20:12:42.912363  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:42.912420  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:42.919733  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:42.924460  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:42.924555  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:42.950089  319301 cri.go:96] found id: ""
	I1227 20:12:42.950112  319301 logs.go:282] 0 containers: []
	W1227 20:12:42.950120  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:42.950126  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:42.950186  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:42.982372  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:42.982393  319301 cri.go:96] found id: ""
	I1227 20:12:42.982400  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:42.982454  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:42.985981  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:42.986048  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:43.025247  319301 cri.go:96] found id: ""
	I1227 20:12:43.025270  319301 logs.go:282] 0 containers: []
	W1227 20:12:43.025279  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:43.025285  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:43.025345  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:43.051039  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:43.051058  319301 cri.go:96] found id: ""
	I1227 20:12:43.051066  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:43.051128  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:43.055686  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:43.055774  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:43.080239  319301 cri.go:96] found id: ""
	I1227 20:12:43.080305  319301 logs.go:282] 0 containers: []
	W1227 20:12:43.080328  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:43.080365  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:43.080392  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:43.117618  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:43.117647  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:43.203203  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:43.203243  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:43.233482  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:43.233514  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:43.331030  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:43.331068  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:43.400596  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:43.391562    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.392218    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.393995    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.395389    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.396936    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:43.391562    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.392218    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.393995    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.395389    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.396936    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:43.400620  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:43.400635  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:43.451280  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:43.451316  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:43.469068  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:43.469097  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:43.497581  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:43.497607  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:43.541271  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:43.541307  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:46.066721  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:46.077342  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:46.077418  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:46.106073  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:46.106096  319301 cri.go:96] found id: ""
	I1227 20:12:46.106105  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:46.106161  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:46.110573  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:46.110647  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:46.141403  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:46.141426  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:46.141431  319301 cri.go:96] found id: ""
	I1227 20:12:46.141438  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:46.141524  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:46.146711  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:46.150119  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:46.150207  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:46.177378  319301 cri.go:96] found id: ""
	I1227 20:12:46.177403  319301 logs.go:282] 0 containers: []
	W1227 20:12:46.177411  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:46.177418  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:46.177523  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:46.203465  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:46.203488  319301 cri.go:96] found id: ""
	I1227 20:12:46.203497  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:46.203554  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:46.207163  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:46.207260  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:46.232721  319301 cri.go:96] found id: ""
	I1227 20:12:46.232748  319301 logs.go:282] 0 containers: []
	W1227 20:12:46.232757  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:46.232764  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:46.232849  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:46.260899  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:46.260924  319301 cri.go:96] found id: ""
	I1227 20:12:46.260933  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:46.261004  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:46.264880  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:46.264994  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:46.294702  319301 cri.go:96] found id: ""
	I1227 20:12:46.294772  319301 logs.go:282] 0 containers: []
	W1227 20:12:46.294788  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:46.294802  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:46.294815  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:46.392870  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:46.392907  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:46.411136  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:46.411165  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:46.442076  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:46.442105  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:46.507864  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:46.500419    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.500963    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.502621    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.503081    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.504499    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:46.500419    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.500963    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.502621    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.503081    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.504499    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:46.507887  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:46.507900  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:46.534504  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:46.534534  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:46.599046  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:46.599082  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:46.644197  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:46.644234  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:46.674716  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:46.674743  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:46.703463  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:46.703492  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:49.285570  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:49.295868  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:49.295960  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:49.323445  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:49.323469  319301 cri.go:96] found id: ""
	I1227 20:12:49.323477  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:49.323567  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:49.327039  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:49.327106  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:49.353757  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:49.353781  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:49.353787  319301 cri.go:96] found id: ""
	I1227 20:12:49.353794  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:49.353854  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:49.360531  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:49.364480  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:49.364568  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:49.392254  319301 cri.go:96] found id: ""
	I1227 20:12:49.392325  319301 logs.go:282] 0 containers: []
	W1227 20:12:49.392349  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:49.392374  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:49.392458  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:49.422197  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:49.422218  319301 cri.go:96] found id: ""
	I1227 20:12:49.422226  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:49.422279  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:49.425742  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:49.425813  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:49.451624  319301 cri.go:96] found id: ""
	I1227 20:12:49.451650  319301 logs.go:282] 0 containers: []
	W1227 20:12:49.451659  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:49.451665  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:49.451725  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:49.477813  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:49.477836  319301 cri.go:96] found id: ""
	I1227 20:12:49.477846  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:49.477911  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:49.481531  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:49.481625  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:49.507374  319301 cri.go:96] found id: ""
	I1227 20:12:49.507400  319301 logs.go:282] 0 containers: []
	W1227 20:12:49.507409  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:49.507425  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:49.507438  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:49.598294  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:49.598336  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:49.636279  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:49.636307  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:49.707651  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:49.707686  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:49.765937  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:49.765972  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:49.783282  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:49.783310  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:49.868264  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:49.856321    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.857001    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.858772    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.863251    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.863608    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:49.856321    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.857001    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.858772    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.863251    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.863608    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:49.868294  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:49.868307  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:49.894496  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:49.894524  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:49.919827  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:49.919864  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:50.000367  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:50.000443  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:52.556360  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:52.566511  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:52.566580  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:52.593484  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:52.593517  319301 cri.go:96] found id: ""
	I1227 20:12:52.593527  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:52.593640  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:52.597279  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:52.597349  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:52.623469  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:52.623547  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:52.623568  319301 cri.go:96] found id: ""
	I1227 20:12:52.623591  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:52.623659  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:52.627305  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:52.630834  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:52.630949  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:52.657093  319301 cri.go:96] found id: ""
	I1227 20:12:52.657120  319301 logs.go:282] 0 containers: []
	W1227 20:12:52.657130  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:52.657136  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:52.657201  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:52.683396  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:52.683470  319301 cri.go:96] found id: ""
	I1227 20:12:52.683487  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:52.683556  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:52.687311  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:52.687381  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:52.716233  319301 cri.go:96] found id: ""
	I1227 20:12:52.716257  319301 logs.go:282] 0 containers: []
	W1227 20:12:52.716266  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:52.716273  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:52.716333  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:52.742458  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:52.742482  319301 cri.go:96] found id: ""
	I1227 20:12:52.742491  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:52.742547  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:52.746498  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:52.746629  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:52.771746  319301 cri.go:96] found id: ""
	I1227 20:12:52.771772  319301 logs.go:282] 0 containers: []
	W1227 20:12:52.771781  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:52.771820  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:52.771837  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:52.824894  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:52.824929  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:52.854289  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:52.854318  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:52.889855  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:52.889887  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:52.993260  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:52.993294  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:53.038574  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:53.038617  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:53.071005  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:53.071035  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:53.149881  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:53.149919  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:53.167391  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:53.167547  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:53.240789  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:53.230138    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.230860    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.232667    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.233277    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.236557    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:53.230138    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.230860    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.232667    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.233277    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.236557    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:53.240810  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:53.240823  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:55.779743  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:55.790606  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:55.790677  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:55.817091  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:55.817112  319301 cri.go:96] found id: ""
	I1227 20:12:55.817121  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:55.817176  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:55.820799  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:55.820876  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:55.850874  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:55.850897  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:55.850903  319301 cri.go:96] found id: ""
	I1227 20:12:55.850911  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:55.850964  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:55.854708  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:55.858278  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:55.858347  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:55.887432  319301 cri.go:96] found id: ""
	I1227 20:12:55.887456  319301 logs.go:282] 0 containers: []
	W1227 20:12:55.887465  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:55.887471  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:55.887526  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:55.914817  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:55.914839  319301 cri.go:96] found id: ""
	I1227 20:12:55.914847  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:55.914903  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:55.918494  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:55.918571  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:55.948625  319301 cri.go:96] found id: ""
	I1227 20:12:55.948648  319301 logs.go:282] 0 containers: []
	W1227 20:12:55.948657  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:55.948664  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:55.948733  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:55.984844  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:55.984867  319301 cri.go:96] found id: ""
	I1227 20:12:55.984875  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:55.984930  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:55.988564  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:55.988652  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:56.016926  319301 cri.go:96] found id: ""
	I1227 20:12:56.016956  319301 logs.go:282] 0 containers: []
	W1227 20:12:56.016966  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:56.016982  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:56.016994  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:56.118289  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:56.118325  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:56.136502  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:56.136532  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:56.169081  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:56.169108  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:56.211041  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:56.211076  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:56.243209  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:56.243244  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:56.314060  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:56.305651    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.306321    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.307810    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.308362    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.310021    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:56.305651    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.306321    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.307810    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.308362    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.310021    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:56.314082  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:56.314098  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:56.377302  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:56.377341  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:56.410912  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:56.410991  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:56.438190  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:56.438218  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:59.018860  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:59.029806  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:59.029879  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:59.058607  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:59.058631  319301 cri.go:96] found id: ""
	I1227 20:12:59.058640  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:59.058697  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:59.062467  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:59.062544  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:59.091353  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:59.091376  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:59.091382  319301 cri.go:96] found id: ""
	I1227 20:12:59.091389  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:59.091445  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:59.095198  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:59.100058  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:59.100137  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:59.126292  319301 cri.go:96] found id: ""
	I1227 20:12:59.126317  319301 logs.go:282] 0 containers: []
	W1227 20:12:59.126326  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:59.126333  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:59.126397  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:59.155155  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:59.155177  319301 cri.go:96] found id: ""
	I1227 20:12:59.155186  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:59.155242  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:59.158920  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:59.158992  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:59.189092  319301 cri.go:96] found id: ""
	I1227 20:12:59.189159  319301 logs.go:282] 0 containers: []
	W1227 20:12:59.189181  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:59.189206  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:59.189294  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:59.216198  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:59.216262  319301 cri.go:96] found id: ""
	I1227 20:12:59.216285  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:59.216377  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:59.224385  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:59.224486  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:59.252259  319301 cri.go:96] found id: ""
	I1227 20:12:59.252285  319301 logs.go:282] 0 containers: []
	W1227 20:12:59.252294  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:59.252309  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:59.252342  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:59.273005  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:59.273034  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:59.301850  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:59.301881  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:59.356187  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:59.356221  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:59.399819  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:59.399852  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:59.433910  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:59.433941  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:59.513398  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:59.513432  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:59.549380  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:59.549409  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:59.623298  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:59.615506    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.615904    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.617387    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.618024    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.619495    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:59.615506    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.615904    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.617387    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.618024    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.619495    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:59.623322  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:59.623336  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:59.649178  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:59.649207  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:02.243275  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:02.254105  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:02.254177  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:02.286583  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:02.286605  319301 cri.go:96] found id: ""
	I1227 20:13:02.286613  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:02.286669  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:02.290640  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:02.290708  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:02.317723  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:02.317746  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:02.317752  319301 cri.go:96] found id: ""
	I1227 20:13:02.317760  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:02.317817  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:02.322227  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:02.325742  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:02.325814  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:02.352306  319301 cri.go:96] found id: ""
	I1227 20:13:02.352333  319301 logs.go:282] 0 containers: []
	W1227 20:13:02.352342  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:02.352349  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:02.352409  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:02.378873  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:02.378896  319301 cri.go:96] found id: ""
	I1227 20:13:02.378906  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:02.378961  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:02.383556  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:02.383681  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:02.421495  319301 cri.go:96] found id: ""
	I1227 20:13:02.421526  319301 logs.go:282] 0 containers: []
	W1227 20:13:02.421550  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:02.421579  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:02.421661  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:02.454963  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:02.454985  319301 cri.go:96] found id: ""
	I1227 20:13:02.454994  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:02.455071  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:02.458781  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:02.458901  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:02.488822  319301 cri.go:96] found id: ""
	I1227 20:13:02.488848  319301 logs.go:282] 0 containers: []
	W1227 20:13:02.488857  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:02.488872  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:02.488904  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:02.513914  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:02.513945  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:02.543786  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:02.543815  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:02.602843  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:02.602877  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:02.634221  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:02.634257  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:02.736305  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:02.736347  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:02.812827  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:02.803912    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.804866    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.806654    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.807254    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.808858    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:02.803912    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.804866    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.806654    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.807254    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.808858    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:02.812848  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:02.812861  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:02.870730  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:02.870770  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:02.896826  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:02.896857  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:02.928575  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:02.928604  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:05.512539  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:05.522703  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:05.522777  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:05.549167  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:05.549187  319301 cri.go:96] found id: ""
	I1227 20:13:05.549195  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:05.549252  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:05.553114  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:05.553224  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:05.591305  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:05.591329  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:05.591334  319301 cri.go:96] found id: ""
	I1227 20:13:05.591342  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:05.591399  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:05.595292  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:05.598966  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:05.599090  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:05.626541  319301 cri.go:96] found id: ""
	I1227 20:13:05.626567  319301 logs.go:282] 0 containers: []
	W1227 20:13:05.626576  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:05.626583  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:05.626644  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:05.658675  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:05.658707  319301 cri.go:96] found id: ""
	I1227 20:13:05.658715  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:05.658771  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:05.662500  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:05.662571  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:05.694208  319301 cri.go:96] found id: ""
	I1227 20:13:05.694232  319301 logs.go:282] 0 containers: []
	W1227 20:13:05.694241  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:05.694248  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:05.694310  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:05.721109  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:05.721133  319301 cri.go:96] found id: ""
	I1227 20:13:05.721152  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:05.721212  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:05.724940  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:05.725010  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:05.751566  319301 cri.go:96] found id: ""
	I1227 20:13:05.751594  319301 logs.go:282] 0 containers: []
	W1227 20:13:05.751604  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:05.751643  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:05.751660  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:05.849663  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:05.849750  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:05.868576  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:05.868607  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:05.934428  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:05.925753    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.926400    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.928037    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.928648    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.930245    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:05.925753    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.926400    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.928037    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.928648    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.930245    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:05.934452  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:05.934466  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:05.965352  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:05.965378  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:06.020452  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:06.020494  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:06.054720  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:06.054750  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:06.084316  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:06.084346  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:06.166870  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:06.166934  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:06.221058  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:06.221095  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:08.753099  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:08.764525  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:08.764592  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:08.790692  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:08.790714  319301 cri.go:96] found id: ""
	I1227 20:13:08.790725  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:08.790781  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:08.794565  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:08.794679  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:08.820711  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:08.820730  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:08.820734  319301 cri.go:96] found id: ""
	I1227 20:13:08.820741  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:08.820797  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:08.824460  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:08.827902  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:08.827991  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:08.869147  319301 cri.go:96] found id: ""
	I1227 20:13:08.869171  319301 logs.go:282] 0 containers: []
	W1227 20:13:08.869184  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:08.869190  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:08.869273  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:08.897503  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:08.897528  319301 cri.go:96] found id: ""
	I1227 20:13:08.897545  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:08.897605  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:08.902138  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:08.902257  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:08.931144  319301 cri.go:96] found id: ""
	I1227 20:13:08.931168  319301 logs.go:282] 0 containers: []
	W1227 20:13:08.931177  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:08.931183  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:08.931240  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:08.958779  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:08.958802  319301 cri.go:96] found id: ""
	I1227 20:13:08.958810  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:08.958892  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:08.962888  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:08.962966  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:08.991222  319301 cri.go:96] found id: ""
	I1227 20:13:08.991248  319301 logs.go:282] 0 containers: []
	W1227 20:13:08.991257  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:08.991270  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:08.991310  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:09.009225  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:09.009256  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:09.081569  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:09.073722    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.074157    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.075724    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.076257    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.078038    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:09.073722    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.074157    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.075724    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.076257    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.078038    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:09.081592  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:09.081608  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:09.112754  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:09.112780  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:09.163779  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:09.163815  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:09.189441  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:09.189512  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:09.271488  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:09.271569  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:09.314936  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:09.314962  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:09.413305  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:09.413344  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:09.465609  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:09.465639  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:12.002552  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:12.014182  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:12.014264  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:12.052377  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:12.052400  319301 cri.go:96] found id: ""
	I1227 20:13:12.052409  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:12.052466  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:12.056292  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:12.056394  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:12.085743  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:12.085765  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:12.085770  319301 cri.go:96] found id: ""
	I1227 20:13:12.085778  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:12.085835  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:12.089812  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:12.093801  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:12.093896  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:12.122289  319301 cri.go:96] found id: ""
	I1227 20:13:12.122359  319301 logs.go:282] 0 containers: []
	W1227 20:13:12.122386  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:12.122402  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:12.122476  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:12.149731  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:12.149758  319301 cri.go:96] found id: ""
	I1227 20:13:12.149767  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:12.149823  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:12.153602  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:12.153688  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:12.178711  319301 cri.go:96] found id: ""
	I1227 20:13:12.178786  319301 logs.go:282] 0 containers: []
	W1227 20:13:12.178808  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:12.178832  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:12.178917  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:12.205322  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:12.205350  319301 cri.go:96] found id: ""
	I1227 20:13:12.205360  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:12.205414  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:12.209024  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:12.209091  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:12.234488  319301 cri.go:96] found id: ""
	I1227 20:13:12.234557  319301 logs.go:282] 0 containers: []
	W1227 20:13:12.234582  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:12.234609  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:12.234640  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:12.261610  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:12.261639  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:12.315635  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:12.315673  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:12.376280  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:12.376313  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:12.402133  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:12.402165  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:12.430982  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:12.431051  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:12.512045  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:12.512078  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:12.530685  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:12.530716  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:12.568375  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:12.568405  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:12.668785  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:12.668822  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:12.735523  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:12.727415    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.728180    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.729943    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.730267    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.732211    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:12.727415    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.728180    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.729943    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.730267    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.732211    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:15.236014  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:15.247391  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:15.247466  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:15.277268  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:15.277342  319301 cri.go:96] found id: ""
	I1227 20:13:15.277365  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:15.277488  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:15.282305  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:15.282373  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:15.312415  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:15.312436  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:15.312441  319301 cri.go:96] found id: ""
	I1227 20:13:15.312449  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:15.312503  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:15.316541  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:15.319901  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:15.319970  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:15.346399  319301 cri.go:96] found id: ""
	I1227 20:13:15.346424  319301 logs.go:282] 0 containers: []
	W1227 20:13:15.346432  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:15.346439  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:15.346496  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:15.373083  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:15.373104  319301 cri.go:96] found id: ""
	I1227 20:13:15.373112  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:15.373165  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:15.376806  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:15.376918  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:15.401683  319301 cri.go:96] found id: ""
	I1227 20:13:15.401708  319301 logs.go:282] 0 containers: []
	W1227 20:13:15.401717  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:15.401725  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:15.401784  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:15.425772  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:15.425796  319301 cri.go:96] found id: ""
	I1227 20:13:15.425804  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:15.425865  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:15.429359  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:15.429426  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:15.457327  319301 cri.go:96] found id: ""
	I1227 20:13:15.457352  319301 logs.go:282] 0 containers: []
	W1227 20:13:15.457361  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:15.457374  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:15.457387  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:15.499826  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:15.499863  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:15.530003  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:15.530040  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:15.557784  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:15.557811  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:15.637950  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:15.637987  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:15.706856  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:15.696364    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.696954    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.699252    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.700375    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.701334    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:15.696364    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.696954    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.699252    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.700375    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.701334    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:15.706878  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:15.706893  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:15.742198  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:15.742227  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:15.838586  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:15.838624  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:15.857986  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:15.858016  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:15.889281  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:15.889313  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:18.468232  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:18.478612  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:18.478682  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:18.506032  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:18.506056  319301 cri.go:96] found id: ""
	I1227 20:13:18.506064  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:18.506116  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:18.509751  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:18.509832  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:18.537503  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:18.537527  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:18.537533  319301 cri.go:96] found id: ""
	I1227 20:13:18.537541  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:18.537645  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:18.543736  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:18.548696  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:18.548770  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:18.574950  319301 cri.go:96] found id: ""
	I1227 20:13:18.574986  319301 logs.go:282] 0 containers: []
	W1227 20:13:18.574996  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:18.575003  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:18.575063  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:18.603311  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:18.603330  319301 cri.go:96] found id: ""
	I1227 20:13:18.603337  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:18.603391  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:18.607317  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:18.607399  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:18.637190  319301 cri.go:96] found id: ""
	I1227 20:13:18.637214  319301 logs.go:282] 0 containers: []
	W1227 20:13:18.637223  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:18.637230  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:18.637290  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:18.664240  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:18.664260  319301 cri.go:96] found id: ""
	I1227 20:13:18.664268  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:18.664323  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:18.667779  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:18.667845  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:18.694174  319301 cri.go:96] found id: ""
	I1227 20:13:18.694198  319301 logs.go:282] 0 containers: []
	W1227 20:13:18.694208  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:18.694222  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:18.694235  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:18.718997  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:18.719027  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:18.745989  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:18.746067  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:18.822381  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:18.822419  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:18.867357  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:18.867387  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:18.970030  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:18.970069  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:18.991124  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:18.991208  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:19.073512  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:19.064985    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.065841    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.067396    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.067963    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.069601    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:19.064985    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.065841    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.067396    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.067963    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.069601    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:19.073537  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:19.073559  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:19.102691  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:19.102717  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:19.156409  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:19.156445  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:21.705847  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:21.716387  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:21.716462  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:21.750665  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:21.750735  319301 cri.go:96] found id: ""
	I1227 20:13:21.750770  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:21.750862  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:21.754653  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:21.754723  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:21.779914  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:21.779938  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:21.779944  319301 cri.go:96] found id: ""
	I1227 20:13:21.779952  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:21.780015  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:21.783993  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:21.787625  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:21.787696  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:21.813514  319301 cri.go:96] found id: ""
	I1227 20:13:21.813543  319301 logs.go:282] 0 containers: []
	W1227 20:13:21.813552  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:21.813559  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:21.813629  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:21.844946  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:21.844968  319301 cri.go:96] found id: ""
	I1227 20:13:21.844976  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:21.845035  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:21.848813  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:21.848884  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:21.874101  319301 cri.go:96] found id: ""
	I1227 20:13:21.874174  319301 logs.go:282] 0 containers: []
	W1227 20:13:21.874190  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:21.874197  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:21.874255  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:21.900432  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:21.900455  319301 cri.go:96] found id: ""
	I1227 20:13:21.900463  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:21.900518  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:21.904020  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:21.904092  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:21.931082  319301 cri.go:96] found id: ""
	I1227 20:13:21.931107  319301 logs.go:282] 0 containers: []
	W1227 20:13:21.931116  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:21.931130  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:21.931173  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:21.977536  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:21.977621  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:22.057131  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:22.057167  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:22.162849  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:22.162890  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:22.181044  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:22.181074  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:22.251501  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:22.243628    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.244178    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.245787    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.246465    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.248081    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:22.243628    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.244178    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.245787    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.246465    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.248081    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:22.251520  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:22.251532  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:22.322039  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:22.322076  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:22.348945  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:22.348981  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:22.376440  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:22.376468  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:22.411192  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:22.411219  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:24.942580  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:24.952758  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:24.952881  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:24.984548  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:24.984572  319301 cri.go:96] found id: ""
	I1227 20:13:24.984580  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:24.984656  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:24.988133  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:24.988203  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:25.026479  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:25.026581  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:25.026603  319301 cri.go:96] found id: ""
	I1227 20:13:25.026645  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:25.026785  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:25.030841  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:25.034716  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:25.034800  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:25.061711  319301 cri.go:96] found id: ""
	I1227 20:13:25.061738  319301 logs.go:282] 0 containers: []
	W1227 20:13:25.061747  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:25.061753  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:25.061810  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:25.089318  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:25.089386  319301 cri.go:96] found id: ""
	I1227 20:13:25.089409  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:25.089517  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:25.093670  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:25.093795  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:25.121407  319301 cri.go:96] found id: ""
	I1227 20:13:25.121525  319301 logs.go:282] 0 containers: []
	W1227 20:13:25.121549  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:25.121569  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:25.121669  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:25.149007  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:25.149080  319301 cri.go:96] found id: ""
	I1227 20:13:25.149103  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:25.149187  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:25.153407  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:25.153596  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:25.179032  319301 cri.go:96] found id: ""
	I1227 20:13:25.179057  319301 logs.go:282] 0 containers: []
	W1227 20:13:25.179066  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:25.179079  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:25.179090  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:25.276200  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:25.276277  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:25.348617  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:25.340243    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.340862    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.343120    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.343588    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.345111    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:25.340243    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.340862    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.343120    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.343588    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.345111    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:25.348638  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:25.348655  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:25.406272  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:25.406306  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:25.452731  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:25.452768  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:25.480251  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:25.480280  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:25.557948  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:25.557985  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:25.593809  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:25.593838  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:25.615397  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:25.615429  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:25.646218  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:25.646248  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:28.174341  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:28.185173  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:28.185244  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:28.211104  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:28.211127  319301 cri.go:96] found id: ""
	I1227 20:13:28.211136  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:28.211191  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:28.214901  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:28.215009  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:28.246215  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:28.246280  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:28.246301  319301 cri.go:96] found id: ""
	I1227 20:13:28.246324  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:28.246405  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:28.250387  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:28.253817  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:28.253888  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:28.287626  319301 cri.go:96] found id: ""
	I1227 20:13:28.287651  319301 logs.go:282] 0 containers: []
	W1227 20:13:28.287659  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:28.287665  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:28.287725  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:28.316933  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:28.316954  319301 cri.go:96] found id: ""
	I1227 20:13:28.316962  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:28.317018  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:28.320933  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:28.321004  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:28.347084  319301 cri.go:96] found id: ""
	I1227 20:13:28.347112  319301 logs.go:282] 0 containers: []
	W1227 20:13:28.347122  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:28.347128  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:28.347185  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:28.378083  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:28.378106  319301 cri.go:96] found id: ""
	I1227 20:13:28.378115  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:28.378169  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:28.382099  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:28.382172  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:28.409209  319301 cri.go:96] found id: ""
	I1227 20:13:28.409235  319301 logs.go:282] 0 containers: []
	W1227 20:13:28.409244  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:28.409257  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:28.409270  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:28.427091  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:28.427120  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:28.490226  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:28.482506    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.483031    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.484594    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.484922    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.486441    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:28.482506    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.483031    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.484594    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.484922    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.486441    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:28.490251  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:28.490265  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:28.531892  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:28.531924  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:28.557604  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:28.557631  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:28.652391  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:28.652428  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:28.680025  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:28.680051  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:28.737147  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:28.737182  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:28.765648  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:28.765682  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:28.843337  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:28.843374  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:31.382818  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:31.393355  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:31.393426  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:31.420305  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:31.420328  319301 cri.go:96] found id: ""
	I1227 20:13:31.420336  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:31.420391  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:31.424001  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:31.424074  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:31.460581  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:31.460615  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:31.460621  319301 cri.go:96] found id: ""
	I1227 20:13:31.460635  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:31.460702  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:31.464544  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:31.468299  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:31.468414  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:31.500491  319301 cri.go:96] found id: ""
	I1227 20:13:31.500517  319301 logs.go:282] 0 containers: []
	W1227 20:13:31.500526  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:31.500533  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:31.500590  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:31.527178  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:31.527203  319301 cri.go:96] found id: ""
	I1227 20:13:31.527211  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:31.527273  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:31.530886  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:31.530980  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:31.558444  319301 cri.go:96] found id: ""
	I1227 20:13:31.558466  319301 logs.go:282] 0 containers: []
	W1227 20:13:31.558475  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:31.558482  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:31.558583  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:31.583987  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:31.584010  319301 cri.go:96] found id: ""
	I1227 20:13:31.584019  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:31.584072  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:31.587656  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:31.587728  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:31.613640  319301 cri.go:96] found id: ""
	I1227 20:13:31.613662  319301 logs.go:282] 0 containers: []
	W1227 20:13:31.613671  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:31.613692  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:31.613708  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:31.642242  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:31.642274  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:31.724401  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:31.724439  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:31.793926  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:31.785945    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.786581    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.788181    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.788659    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.789864    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:31.785945    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.786581    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.788181    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.788659    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.789864    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:31.793989  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:31.794011  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:31.825164  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:31.825193  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:31.877179  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:31.877211  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:31.912284  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:31.912319  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:32.015514  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:32.015558  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:32.034674  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:32.034705  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:32.099008  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:32.099062  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:34.634778  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:34.656177  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:34.656243  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:34.684782  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:34.684801  319301 cri.go:96] found id: ""
	I1227 20:13:34.684810  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:34.684865  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:34.688514  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:34.688585  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:34.712895  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:34.712915  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:34.712921  319301 cri.go:96] found id: ""
	I1227 20:13:34.712928  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:34.712995  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:34.716706  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:34.720270  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:34.720346  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:34.746430  319301 cri.go:96] found id: ""
	I1227 20:13:34.746456  319301 logs.go:282] 0 containers: []
	W1227 20:13:34.746465  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:34.746472  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:34.746530  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:34.773423  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:34.773481  319301 cri.go:96] found id: ""
	I1227 20:13:34.773490  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:34.773560  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:34.777238  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:34.777325  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:34.804429  319301 cri.go:96] found id: ""
	I1227 20:13:34.804455  319301 logs.go:282] 0 containers: []
	W1227 20:13:34.804464  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:34.804471  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:34.804528  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:34.837390  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:34.837412  319301 cri.go:96] found id: ""
	I1227 20:13:34.837421  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:34.837518  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:34.841292  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:34.841362  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:34.882512  319301 cri.go:96] found id: ""
	I1227 20:13:34.882537  319301 logs.go:282] 0 containers: []
	W1227 20:13:34.882547  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:34.882561  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:34.882593  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:34.935722  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:34.935778  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:34.963786  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:34.963815  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:35.068786  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:35.068824  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:35.118359  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:35.118402  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:35.146117  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:35.146144  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:35.223101  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:35.223145  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:35.255059  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:35.255089  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:35.276475  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:35.276510  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:35.351174  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:35.342460    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.343305    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.344856    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.345617    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.347573    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:35.342460    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.343305    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.344856    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.345617    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.347573    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:35.351239  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:35.351268  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:37.881796  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:37.894482  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:37.894556  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:37.924732  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:37.924756  319301 cri.go:96] found id: ""
	I1227 20:13:37.924765  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:37.924821  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:37.928636  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:37.928711  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:37.956752  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:37.956775  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:37.956781  319301 cri.go:96] found id: ""
	I1227 20:13:37.956801  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:37.956860  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:37.960536  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:37.964778  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:37.964879  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:37.998167  319301 cri.go:96] found id: ""
	I1227 20:13:37.998192  319301 logs.go:282] 0 containers: []
	W1227 20:13:37.998202  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:37.998208  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:37.998268  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:38.027828  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:38.027903  319301 cri.go:96] found id: ""
	I1227 20:13:38.027928  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:38.028019  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:38.032285  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:38.032374  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:38.063193  319301 cri.go:96] found id: ""
	I1227 20:13:38.063219  319301 logs.go:282] 0 containers: []
	W1227 20:13:38.063238  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:38.063277  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:38.063338  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:38.100160  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:38.100184  319301 cri.go:96] found id: ""
	I1227 20:13:38.100192  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:38.100248  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:38.104272  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:38.104360  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:38.132286  319301 cri.go:96] found id: ""
	I1227 20:13:38.132319  319301 logs.go:282] 0 containers: []
	W1227 20:13:38.132329  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:38.132343  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:38.132355  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:38.163697  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:38.163723  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:38.181632  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:38.181662  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:38.210225  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:38.210258  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:38.255805  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:38.255842  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:38.358465  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:38.358500  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:38.425713  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:38.417673    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.418194    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.420263    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.420756    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.422182    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:38.417673    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.418194    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.420263    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.420756    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.422182    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:38.425743  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:38.425766  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:38.481423  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:38.481466  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:38.506752  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:38.506783  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:38.536076  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:38.536104  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:41.112032  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:41.122203  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:41.122272  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:41.147769  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:41.147833  319301 cri.go:96] found id: ""
	I1227 20:13:41.147858  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:41.147945  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:41.151581  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:41.151651  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:41.176060  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:41.176078  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:41.176082  319301 cri.go:96] found id: ""
	I1227 20:13:41.176090  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:41.176144  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:41.179877  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:41.183247  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:41.183311  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:41.212692  319301 cri.go:96] found id: ""
	I1227 20:13:41.212717  319301 logs.go:282] 0 containers: []
	W1227 20:13:41.212727  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:41.212733  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:41.212814  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:41.237313  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:41.237335  319301 cri.go:96] found id: ""
	I1227 20:13:41.237343  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:41.237429  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:41.241432  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:41.241552  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:41.274168  319301 cri.go:96] found id: ""
	I1227 20:13:41.274196  319301 logs.go:282] 0 containers: []
	W1227 20:13:41.274206  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:41.274212  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:41.274295  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:41.300597  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:41.300620  319301 cri.go:96] found id: ""
	I1227 20:13:41.300628  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:41.300702  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:41.304360  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:41.304466  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:41.330795  319301 cri.go:96] found id: ""
	I1227 20:13:41.330819  319301 logs.go:282] 0 containers: []
	W1227 20:13:41.330828  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:41.330860  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:41.330885  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:41.358931  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:41.358960  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:41.383514  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:41.383539  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:41.469734  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:41.469771  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:41.573372  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:41.573411  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:41.591886  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:41.591916  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:41.674483  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:41.665884    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.666635    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.667427    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.669130    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.669864    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:41.665884    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.666635    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.667427    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.669130    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.669864    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:41.674507  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:41.674521  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:41.756704  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:41.756741  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:41.803676  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:41.803709  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:41.838752  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:41.838785  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:44.371993  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:44.382732  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:44.382811  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:44.408302  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:44.408324  319301 cri.go:96] found id: ""
	I1227 20:13:44.408332  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:44.408387  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:44.411908  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:44.411977  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:44.438505  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:44.438537  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:44.438543  319301 cri.go:96] found id: ""
	I1227 20:13:44.438551  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:44.438612  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:44.443020  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:44.446843  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:44.446907  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:44.473249  319301 cri.go:96] found id: ""
	I1227 20:13:44.473273  319301 logs.go:282] 0 containers: []
	W1227 20:13:44.473282  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:44.473288  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:44.473344  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:44.506635  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:44.506657  319301 cri.go:96] found id: ""
	I1227 20:13:44.506665  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:44.506719  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:44.510255  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:44.510327  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:44.535681  319301 cri.go:96] found id: ""
	I1227 20:13:44.535706  319301 logs.go:282] 0 containers: []
	W1227 20:13:44.535715  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:44.535722  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:44.535779  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:44.566431  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:44.566454  319301 cri.go:96] found id: ""
	I1227 20:13:44.566463  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:44.566544  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:44.570308  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:44.570429  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:44.596900  319301 cri.go:96] found id: ""
	I1227 20:13:44.596925  319301 logs.go:282] 0 containers: []
	W1227 20:13:44.596935  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:44.596969  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:44.596988  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:44.641306  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:44.641338  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:44.670860  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:44.670887  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:44.698228  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:44.698303  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:44.781609  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:44.781645  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:44.832828  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:44.832857  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:44.851403  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:44.851434  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:44.883766  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:44.883796  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:44.982715  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:44.982754  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:45.102278  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:45.090748   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.091715   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.092803   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.093981   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.094942   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:45.090748   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.091715   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.092803   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.093981   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.094942   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:45.102308  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:45.102333  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
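The container discovery in the cycle above is plain crictl; it can be reproduced by hand on the node with the same filters (a minimal sketch using the exact commands from the log, assuming crictl is installed and pointed at the default CRI socket):

	# list all kube-apiserver containers (any state), printing only their IDs
	sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	# same pattern for the other control-plane components
	sudo crictl --timeout=10s ps -a --quiet --name=etcd
	sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager

An empty result for a component (as seen for coredns, kube-proxy and kindnet above) is what produces the "No container was found matching ..." warnings.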
	I1227 20:13:47.711741  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:47.722289  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:47.722355  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:47.752456  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:47.752475  319301 cri.go:96] found id: ""
	I1227 20:13:47.752483  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:47.752545  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:47.756223  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:47.756290  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:47.781994  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:47.782016  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:47.782021  319301 cri.go:96] found id: ""
	I1227 20:13:47.782029  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:47.782082  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:47.785803  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:47.789134  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:47.789202  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:47.819133  319301 cri.go:96] found id: ""
	I1227 20:13:47.819166  319301 logs.go:282] 0 containers: []
	W1227 20:13:47.819176  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:47.819188  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:47.819261  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:47.848513  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:47.848534  319301 cri.go:96] found id: ""
	I1227 20:13:47.848542  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:47.848602  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:47.852477  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:47.852545  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:47.879163  319301 cri.go:96] found id: ""
	I1227 20:13:47.879188  319301 logs.go:282] 0 containers: []
	W1227 20:13:47.879198  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:47.879204  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:47.879288  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:47.906400  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:47.906422  319301 cri.go:96] found id: ""
	I1227 20:13:47.906430  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:47.906487  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:47.910061  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:47.910142  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:47.936751  319301 cri.go:96] found id: ""
	I1227 20:13:47.936822  319301 logs.go:282] 0 containers: []
	W1227 20:13:47.936855  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:47.936885  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:47.936928  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:48.041904  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:48.041941  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:48.059753  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:48.059783  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:48.091794  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:48.091825  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:48.119314  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:48.119341  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:48.167631  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:48.167656  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:48.236954  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:48.226933   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.228070   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.229057   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.230849   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.231433   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:48.226933   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.228070   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.229057   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.230849   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.231433   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:48.236978  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:48.236992  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:48.266604  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:48.266634  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:48.326691  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:48.326727  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:48.370030  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:48.370062  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
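Every "describe nodes" attempt in this run fails the same way because nothing answers on localhost:8443. The failing command can be replayed directly on the node, and the port probed; the kubectl line is copied from the log, while the curl health probe is an illustrative addition (it assumes curl is available on the node) rather than part of the minikube output:

	# the exact command minikube runs; fails with "connection refused" while the apiserver is down
	sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	# optional probe of the apiserver port (hypothetical check, not from the log)
	curl -k https://localhost:8443/healthz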
	I1227 20:13:50.950604  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:50.960973  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:50.961044  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:50.989711  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:50.989734  319301 cri.go:96] found id: ""
	I1227 20:13:50.989743  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:50.989813  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:50.993765  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:50.993882  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:51.024930  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:51.024955  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:51.024976  319301 cri.go:96] found id: ""
	I1227 20:13:51.025000  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:51.025060  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:51.029133  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:51.034041  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:51.034136  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:51.061567  319301 cri.go:96] found id: ""
	I1227 20:13:51.061590  319301 logs.go:282] 0 containers: []
	W1227 20:13:51.061599  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:51.061608  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:51.061673  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:51.090737  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:51.090764  319301 cri.go:96] found id: ""
	I1227 20:13:51.090773  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:51.090847  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:51.095345  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:51.095432  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:51.123208  319301 cri.go:96] found id: ""
	I1227 20:13:51.123244  319301 logs.go:282] 0 containers: []
	W1227 20:13:51.123254  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:51.123260  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:51.123334  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:51.154295  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:51.154317  319301 cri.go:96] found id: ""
	I1227 20:13:51.154325  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:51.154407  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:51.158410  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:51.158485  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:51.189846  319301 cri.go:96] found id: ""
	I1227 20:13:51.189882  319301 logs.go:282] 0 containers: []
	W1227 20:13:51.189896  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:51.189909  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:51.189921  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:51.286819  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:51.286858  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:51.305366  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:51.305393  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:51.380305  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:51.380343  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:51.441677  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:51.441710  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:51.481914  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:51.481949  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:51.547090  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:51.539048   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.539678   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.541335   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.541928   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.543466   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:51.539048   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.539678   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.541335   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.541928   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.543466   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:51.547154  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:51.547176  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:51.578696  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:51.578725  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:51.608004  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:51.608032  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:51.636360  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:51.636391  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
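The per-component log collection above maps one-to-one onto standard tooling; the same data can be pulled manually with the commands the log already shows (a minimal sketch, using the kube-apiserver container ID found earlier in this run as the example ID):

	# last 400 lines of kubelet and CRI-O from the journal
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	# last 400 lines from a specific container, e.g. the kube-apiserver container above
	sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722
	# kernel warnings and errors
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400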
	I1227 20:13:54.212415  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:54.222852  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:54.222923  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:54.251561  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:54.251580  319301 cri.go:96] found id: ""
	I1227 20:13:54.251587  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:54.251645  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:54.255279  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:54.255354  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:54.292682  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:54.292706  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:54.292711  319301 cri.go:96] found id: ""
	I1227 20:13:54.292719  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:54.292781  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:54.296595  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:54.300085  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:54.300159  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:54.326489  319301 cri.go:96] found id: ""
	I1227 20:13:54.326555  319301 logs.go:282] 0 containers: []
	W1227 20:13:54.326579  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:54.326605  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:54.326696  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:54.353313  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:54.353338  319301 cri.go:96] found id: ""
	I1227 20:13:54.353347  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:54.353403  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:54.356927  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:54.356999  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:54.381581  319301 cri.go:96] found id: ""
	I1227 20:13:54.381617  319301 logs.go:282] 0 containers: []
	W1227 20:13:54.381626  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:54.381633  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:54.381691  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:54.414363  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:54.414383  319301 cri.go:96] found id: ""
	I1227 20:13:54.414391  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:54.414446  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:54.418045  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:54.418114  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:54.449206  319301 cri.go:96] found id: ""
	I1227 20:13:54.449229  319301 logs.go:282] 0 containers: []
	W1227 20:13:54.449238  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:54.449252  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:54.449264  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:54.517227  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:54.508584   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.509203   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.510795   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.511388   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.512826   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:54.508584   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.509203   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.510795   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.511388   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.512826   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:54.517253  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:54.517266  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:54.544360  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:54.544391  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:54.599513  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:54.599547  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:54.644818  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:54.644847  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:54.688568  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:54.688609  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:54.713724  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:54.713751  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:54.741842  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:54.741868  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:54.820175  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:54.820209  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:54.925045  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:54.925099  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
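The "container status" step uses a small shell fallback so it still returns something when crictl is not on the PATH: the backtick expression substitutes the resolved crictl path when `which` succeeds (and the bare name otherwise), and the trailing `|| sudo docker ps -a` falls back to Docker if that invocation fails. Copied from the log:

	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a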
	I1227 20:13:57.443738  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:57.454148  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:57.454219  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:57.484004  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:57.484071  319301 cri.go:96] found id: ""
	I1227 20:13:57.484087  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:57.484154  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:57.487937  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:57.488009  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:57.513954  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:57.513978  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:57.513983  319301 cri.go:96] found id: ""
	I1227 20:13:57.513991  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:57.514048  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:57.517734  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:57.521248  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:57.521322  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:57.548709  319301 cri.go:96] found id: ""
	I1227 20:13:57.548734  319301 logs.go:282] 0 containers: []
	W1227 20:13:57.548743  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:57.548749  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:57.548807  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:57.574830  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:57.574853  319301 cri.go:96] found id: ""
	I1227 20:13:57.574862  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:57.574919  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:57.578643  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:57.578770  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:57.604928  319301 cri.go:96] found id: ""
	I1227 20:13:57.604952  319301 logs.go:282] 0 containers: []
	W1227 20:13:57.604961  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:57.604967  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:57.605037  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:57.636096  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:57.636118  319301 cri.go:96] found id: ""
	I1227 20:13:57.636126  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:57.636181  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:57.640206  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:57.640289  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:57.664867  319301 cri.go:96] found id: ""
	I1227 20:13:57.664893  319301 logs.go:282] 0 containers: []
	W1227 20:13:57.664903  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:57.664918  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:57.664930  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:57.760571  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:57.760614  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:57.779034  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:57.779063  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:57.860979  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:57.853801   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.854291   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.855825   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.856219   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.857717   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:57.853801   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.854291   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.855825   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.856219   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.857717   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:57.861005  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:57.861030  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:57.891248  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:57.891279  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:57.951146  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:57.951184  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:57.983957  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:57.983983  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:58.027711  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:58.027751  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:58.057942  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:58.057967  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:58.134700  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:58.134737  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
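Each retry cycle begins with a process-level check for the apiserver before the containers are re-listed. The standalone equivalent, copied from the log with the pattern quoted for the shell, is:

	# newest process whose full command line matches a kube-apiserver run under minikube
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'

The timestamps above show the same cycle repeating every few seconds, consistent with minikube polling until the apiserver becomes reachable again.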
	I1227 20:14:00.665876  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:00.676353  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:00.676426  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:00.704251  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:00.704274  319301 cri.go:96] found id: ""
	I1227 20:14:00.704284  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:00.704369  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:00.708101  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:00.708172  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:00.744575  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:00.744598  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:00.744602  319301 cri.go:96] found id: ""
	I1227 20:14:00.744610  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:00.744681  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:00.748672  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:00.752393  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:00.752495  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:00.778438  319301 cri.go:96] found id: ""
	I1227 20:14:00.778463  319301 logs.go:282] 0 containers: []
	W1227 20:14:00.778472  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:00.778478  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:00.778568  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:00.804119  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:00.804143  319301 cri.go:96] found id: ""
	I1227 20:14:00.804152  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:00.804243  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:00.807914  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:00.808018  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:00.837548  319301 cri.go:96] found id: ""
	I1227 20:14:00.837626  319301 logs.go:282] 0 containers: []
	W1227 20:14:00.837640  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:00.837648  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:00.837723  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:00.864504  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:00.864527  319301 cri.go:96] found id: ""
	I1227 20:14:00.864535  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:00.864590  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:00.868408  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:00.868482  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:00.897150  319301 cri.go:96] found id: ""
	I1227 20:14:00.897173  319301 logs.go:282] 0 containers: []
	W1227 20:14:00.897182  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:00.897197  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:00.897210  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:00.998644  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:00.998688  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:01.021375  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:01.021415  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:01.054456  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:01.054487  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:01.115661  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:01.115700  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:01.161388  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:01.161423  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:01.192518  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:01.192549  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:01.275490  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:01.275523  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:01.341916  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:01.334014   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.334408   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.335994   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.336428   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.337960   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:01.334014   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.334408   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.335994   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.336428   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.337960   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:01.341937  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:01.341950  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:01.368174  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:01.368205  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:03.909559  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:03.920151  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:03.920223  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:03.950304  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:03.950321  319301 cri.go:96] found id: ""
	I1227 20:14:03.950329  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:03.950383  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:03.954284  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:03.954356  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:03.991836  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:03.991917  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:03.991937  319301 cri.go:96] found id: ""
	I1227 20:14:03.991960  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:03.992044  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:03.996532  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:04.000198  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:04.000315  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:04.031549  319301 cri.go:96] found id: ""
	I1227 20:14:04.031622  319301 logs.go:282] 0 containers: []
	W1227 20:14:04.031647  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:04.031671  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:04.031765  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:04.060260  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:04.060328  319301 cri.go:96] found id: ""
	I1227 20:14:04.060356  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:04.060444  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:04.064496  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:04.064588  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:04.102911  319301 cri.go:96] found id: ""
	I1227 20:14:04.103013  319301 logs.go:282] 0 containers: []
	W1227 20:14:04.103124  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:04.103169  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:04.103319  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:04.131147  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:04.131212  319301 cri.go:96] found id: ""
	I1227 20:14:04.131234  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:04.131327  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:04.135698  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:04.135819  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:04.164124  319301 cri.go:96] found id: ""
	I1227 20:14:04.164202  319301 logs.go:282] 0 containers: []
	W1227 20:14:04.164224  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:04.164266  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:04.164297  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:04.182491  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:04.182521  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:04.211036  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:04.211068  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:04.256784  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:04.256821  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:04.348299  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:04.348336  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:04.450573  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:04.450613  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:04.516283  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:04.506999   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.507835   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.510141   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.510856   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.512527   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:04.506999   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.507835   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.510141   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.510856   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.512527   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:04.516305  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:04.516319  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:04.576841  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:04.576872  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:04.614008  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:04.614035  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:04.641690  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:04.641719  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:07.176073  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:07.186712  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:07.186783  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:07.211686  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:07.211709  319301 cri.go:96] found id: ""
	I1227 20:14:07.211718  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:07.211775  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:07.215681  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:07.215756  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:07.240540  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:07.240563  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:07.240569  319301 cri.go:96] found id: ""
	I1227 20:14:07.240577  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:07.240630  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:07.245279  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:07.249179  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:07.249250  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:07.276774  319301 cri.go:96] found id: ""
	I1227 20:14:07.276800  319301 logs.go:282] 0 containers: []
	W1227 20:14:07.276810  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:07.276816  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:07.276873  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:07.304802  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:07.304821  319301 cri.go:96] found id: ""
	I1227 20:14:07.304829  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:07.304883  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:07.308534  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:07.308604  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:07.336318  319301 cri.go:96] found id: ""
	I1227 20:14:07.336344  319301 logs.go:282] 0 containers: []
	W1227 20:14:07.336354  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:07.336360  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:07.336423  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:07.362751  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:07.362771  319301 cri.go:96] found id: ""
	I1227 20:14:07.362780  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:07.362840  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:07.366846  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:07.366918  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:07.395130  319301 cri.go:96] found id: ""
	I1227 20:14:07.395152  319301 logs.go:282] 0 containers: []
	W1227 20:14:07.395161  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:07.395175  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:07.395187  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:07.491440  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:07.491518  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:07.527740  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:07.527770  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:07.558436  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:07.558464  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:07.588229  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:07.588259  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:07.607165  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:07.607197  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:07.677755  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:07.668928   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.669821   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.671526   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.672177   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.673864   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:07.668928   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.669821   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.671526   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.672177   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.673864   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:07.677777  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:07.677791  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:07.739114  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:07.739152  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:07.784369  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:07.784406  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:07.810544  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:07.810571  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:10.388063  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:10.398699  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:10.398769  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:10.429540  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:10.429607  319301 cri.go:96] found id: ""
	I1227 20:14:10.429631  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:10.429721  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:10.433534  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:10.433651  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:10.459275  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:10.459297  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:10.459303  319301 cri.go:96] found id: ""
	I1227 20:14:10.459310  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:10.459366  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:10.463124  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:10.466705  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:10.466798  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:10.492126  319301 cri.go:96] found id: ""
	I1227 20:14:10.492155  319301 logs.go:282] 0 containers: []
	W1227 20:14:10.492173  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:10.492184  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:10.492242  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:10.518226  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:10.518248  319301 cri.go:96] found id: ""
	I1227 20:14:10.518256  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:10.518364  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:10.522989  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:10.523096  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:10.549695  319301 cri.go:96] found id: ""
	I1227 20:14:10.549722  319301 logs.go:282] 0 containers: []
	W1227 20:14:10.549732  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:10.549738  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:10.549798  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:10.579366  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:10.579390  319301 cri.go:96] found id: ""
	I1227 20:14:10.579398  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:10.579455  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:10.583638  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:10.583714  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:10.615082  319301 cri.go:96] found id: ""
	I1227 20:14:10.615105  319301 logs.go:282] 0 containers: []
	W1227 20:14:10.615113  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:10.615130  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:10.615142  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:10.683394  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:10.674472   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.675801   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.676387   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.678136   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.678634   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:10.674472   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.675801   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.676387   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.678136   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.678634   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:10.683412  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:10.683425  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:10.727898  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:10.727931  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:10.753009  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:10.753042  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:10.782677  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:10.782703  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:10.866110  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:10.866147  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:10.959413  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:10.959452  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:10.977909  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:10.977941  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:11.005943  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:11.005969  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:11.074309  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:11.074346  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:13.614417  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:13.625578  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:13.625646  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:13.652507  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:13.652525  319301 cri.go:96] found id: ""
	I1227 20:14:13.652534  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:13.652588  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:13.656545  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:13.656609  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:13.683073  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:13.683097  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:13.683102  319301 cri.go:96] found id: ""
	I1227 20:14:13.683110  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:13.683166  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:13.686968  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:13.690405  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:13.690466  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:13.717840  319301 cri.go:96] found id: ""
	I1227 20:14:13.717864  319301 logs.go:282] 0 containers: []
	W1227 20:14:13.717873  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:13.717879  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:13.717938  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:13.746028  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:13.746049  319301 cri.go:96] found id: ""
	I1227 20:14:13.746058  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:13.746117  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:13.749660  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:13.749741  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:13.775234  319301 cri.go:96] found id: ""
	I1227 20:14:13.775301  319301 logs.go:282] 0 containers: []
	W1227 20:14:13.775322  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:13.775330  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:13.775388  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:13.800618  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:13.800642  319301 cri.go:96] found id: ""
	I1227 20:14:13.800650  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:13.800708  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:13.804545  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:13.804619  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:13.832761  319301 cri.go:96] found id: ""
	I1227 20:14:13.832786  319301 logs.go:282] 0 containers: []
	W1227 20:14:13.832795  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:13.832811  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:13.832824  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:13.851133  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:13.851163  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:13.926603  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:13.926681  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:13.961517  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:13.961544  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:14.069694  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:14.069739  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:14.151483  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:14.142577   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.143391   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.145037   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.145551   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.147508   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:14.142577   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.143391   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.145037   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.145551   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.147508   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:14.151505  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:14.151520  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:14.181727  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:14.181758  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:14.240301  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:14.240339  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:14.300709  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:14.300743  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:14.336466  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:14.336498  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:16.865634  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:16.876358  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:16.876432  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:16.904188  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:16.904253  319301 cri.go:96] found id: ""
	I1227 20:14:16.904276  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:16.904367  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:16.908220  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:16.908322  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:16.937896  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:16.937919  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:16.937924  319301 cri.go:96] found id: ""
	I1227 20:14:16.937932  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:16.937986  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:16.942670  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:16.946301  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:16.946387  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:16.985586  319301 cri.go:96] found id: ""
	I1227 20:14:16.985609  319301 logs.go:282] 0 containers: []
	W1227 20:14:16.985618  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:16.985624  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:16.985683  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:17.013996  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:17.014029  319301 cri.go:96] found id: ""
	I1227 20:14:17.014039  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:17.014137  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:17.018935  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:17.019008  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:17.052484  319301 cri.go:96] found id: ""
	I1227 20:14:17.052561  319301 logs.go:282] 0 containers: []
	W1227 20:14:17.052583  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:17.052604  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:17.052695  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:17.081622  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:17.081695  319301 cri.go:96] found id: ""
	I1227 20:14:17.081718  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:17.081788  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:17.085690  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:17.085794  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:17.112049  319301 cri.go:96] found id: ""
	I1227 20:14:17.112074  319301 logs.go:282] 0 containers: []
	W1227 20:14:17.112082  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:17.112098  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:17.112141  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:17.137714  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:17.137743  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:17.213490  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:17.213533  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:17.246326  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:17.246356  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:17.328320  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:17.320845   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.321569   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.322897   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.323352   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.324795   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:17.320845   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.321569   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.322897   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.323352   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.324795   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:17.328340  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:17.328353  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:17.385541  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:17.385578  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:17.427419  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:17.427449  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:17.452174  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:17.452206  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:17.546685  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:17.546724  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:17.565295  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:17.565332  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:20.098978  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:20.111051  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:20.111126  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:20.137851  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:20.137927  319301 cri.go:96] found id: ""
	I1227 20:14:20.137963  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:20.138055  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:20.142900  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:20.143001  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:20.170010  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:20.170087  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:20.170109  319301 cri.go:96] found id: ""
	I1227 20:14:20.170137  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:20.170221  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:20.175063  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:20.178747  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:20.178824  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:20.206381  319301 cri.go:96] found id: ""
	I1227 20:14:20.206409  319301 logs.go:282] 0 containers: []
	W1227 20:14:20.206418  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:20.206425  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:20.206485  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:20.233473  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:20.233499  319301 cri.go:96] found id: ""
	I1227 20:14:20.233508  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:20.233571  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:20.237997  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:20.238070  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:20.262995  319301 cri.go:96] found id: ""
	I1227 20:14:20.263067  319301 logs.go:282] 0 containers: []
	W1227 20:14:20.263092  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:20.263099  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:20.263170  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:20.288462  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:20.288537  319301 cri.go:96] found id: ""
	I1227 20:14:20.288566  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:20.288647  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:20.292436  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:20.292550  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:20.322573  319301 cri.go:96] found id: ""
	I1227 20:14:20.322596  319301 logs.go:282] 0 containers: []
	W1227 20:14:20.322605  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:20.322621  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:20.322633  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:20.432211  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:20.432245  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:20.496754  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:20.496791  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:20.540278  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:20.540351  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:20.567122  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:20.567152  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:20.648855  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:20.648895  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:20.667153  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:20.667185  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:20.736076  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:20.727815   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.728362   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.730119   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.730829   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.732497   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:20.727815   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.728362   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.730119   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.730829   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.732497   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:20.736098  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:20.736112  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:20.762277  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:20.762304  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:20.800871  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:20.800901  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:23.331772  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:23.342153  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:23.342227  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:23.367402  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:23.367424  319301 cri.go:96] found id: ""
	I1227 20:14:23.367433  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:23.367489  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:23.371067  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:23.371137  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:23.397005  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:23.397081  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:23.397101  319301 cri.go:96] found id: ""
	I1227 20:14:23.397127  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:23.397212  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:23.401002  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:23.404386  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:23.404490  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:23.430285  319301 cri.go:96] found id: ""
	I1227 20:14:23.430309  319301 logs.go:282] 0 containers: []
	W1227 20:14:23.430318  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:23.430326  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:23.430383  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:23.461494  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:23.461517  319301 cri.go:96] found id: ""
	I1227 20:14:23.461526  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:23.461578  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:23.465337  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:23.465409  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:23.496783  319301 cri.go:96] found id: ""
	I1227 20:14:23.496808  319301 logs.go:282] 0 containers: []
	W1227 20:14:23.496818  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:23.496824  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:23.496881  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:23.522580  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:23.522602  319301 cri.go:96] found id: ""
	I1227 20:14:23.522610  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:23.522665  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:23.526436  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:23.526519  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:23.557267  319301 cri.go:96] found id: ""
	I1227 20:14:23.557299  319301 logs.go:282] 0 containers: []
	W1227 20:14:23.557309  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:23.557325  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:23.557336  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:23.584981  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:23.585010  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:23.648213  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:23.648252  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:23.695771  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:23.695847  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:23.726135  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:23.726165  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:23.810400  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:23.810440  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:23.916410  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:23.916451  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:23.945753  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:23.945825  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:23.996874  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:23.996903  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:24.015806  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:24.015853  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:24.093634  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:24.083702   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.084655   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.086499   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.086863   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.088426   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:24.083702   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.084655   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.086499   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.086863   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.088426   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:26.595192  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:26.607312  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:26.607388  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:26.644526  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:26.644546  319301 cri.go:96] found id: ""
	I1227 20:14:26.644554  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:26.644613  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:26.648515  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:26.648588  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:26.674360  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:26.674383  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:26.674387  319301 cri.go:96] found id: ""
	I1227 20:14:26.674395  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:26.674451  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:26.678114  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:26.681548  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:26.681619  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:26.707823  319301 cri.go:96] found id: ""
	I1227 20:14:26.707847  319301 logs.go:282] 0 containers: []
	W1227 20:14:26.707856  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:26.707863  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:26.707918  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:26.736808  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:26.736830  319301 cri.go:96] found id: ""
	I1227 20:14:26.736839  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:26.736910  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:26.740449  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:26.740516  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:26.767979  319301 cri.go:96] found id: ""
	I1227 20:14:26.768005  319301 logs.go:282] 0 containers: []
	W1227 20:14:26.768014  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:26.768020  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:26.768093  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:26.794399  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:26.794419  319301 cri.go:96] found id: ""
	I1227 20:14:26.794428  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:26.794482  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:26.798158  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:26.798242  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:26.822859  319301 cri.go:96] found id: ""
	I1227 20:14:26.822883  319301 logs.go:282] 0 containers: []
	W1227 20:14:26.822893  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:26.822924  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:26.822946  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:26.868214  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:26.868238  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:26.932994  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:26.933029  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:26.977303  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:26.977340  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:27.068000  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:27.068040  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:27.171536  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:27.171574  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:27.190535  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:27.190562  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:27.216736  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:27.216762  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:27.243411  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:27.243439  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:27.295099  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:27.295126  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:27.357878  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:27.350559   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.350955   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.352482   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.352824   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.354320   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:27.350559   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.350955   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.352482   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.352824   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.354320   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:29.858681  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:29.868776  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:29.868844  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:29.896575  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:29.896597  319301 cri.go:96] found id: ""
	I1227 20:14:29.896605  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:29.896686  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:29.900141  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:29.900230  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:29.933885  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:29.933909  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:29.933915  319301 cri.go:96] found id: ""
	I1227 20:14:29.933922  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:29.933995  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:29.937419  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:29.940597  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:29.940661  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:29.985795  319301 cri.go:96] found id: ""
	I1227 20:14:29.985826  319301 logs.go:282] 0 containers: []
	W1227 20:14:29.985836  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:29.985843  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:29.985919  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:30.025679  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:30.025700  319301 cri.go:96] found id: ""
	I1227 20:14:30.025709  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:30.025777  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:30.049697  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:30.049787  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:30.082890  319301 cri.go:96] found id: ""
	I1227 20:14:30.082916  319301 logs.go:282] 0 containers: []
	W1227 20:14:30.082926  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:30.082934  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:30.083006  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:30.119124  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:30.119148  319301 cri.go:96] found id: ""
	I1227 20:14:30.119156  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:30.119217  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:30.123169  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:30.123244  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:30.151766  319301 cri.go:96] found id: ""
	I1227 20:14:30.151790  319301 logs.go:282] 0 containers: []
	W1227 20:14:30.151799  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:30.151816  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:30.151828  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:30.169326  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:30.169357  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:30.199380  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:30.199412  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:30.265121  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:30.265163  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:30.356459  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:30.356498  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:30.392984  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:30.393013  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:30.499474  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:30.499511  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:30.571342  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:30.561186   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.563435   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.564195   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.566014   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.566655   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:30.561186   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.563435   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.564195   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.566014   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.566655   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:30.571365  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:30.571378  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:30.615172  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:30.615207  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:30.644774  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:30.644803  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:33.172504  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:33.183855  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:33.183927  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:33.214210  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:33.214232  319301 cri.go:96] found id: ""
	I1227 20:14:33.214241  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:33.214307  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:33.218161  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:33.218245  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:33.244477  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:33.244501  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:33.244506  319301 cri.go:96] found id: ""
	I1227 20:14:33.244513  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:33.244574  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:33.248725  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:33.252096  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:33.252166  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:33.284273  319301 cri.go:96] found id: ""
	I1227 20:14:33.284304  319301 logs.go:282] 0 containers: []
	W1227 20:14:33.284317  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:33.284327  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:33.284406  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:33.311094  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:33.311117  319301 cri.go:96] found id: ""
	I1227 20:14:33.311125  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:33.311184  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:33.315375  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:33.315450  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:33.344846  319301 cri.go:96] found id: ""
	I1227 20:14:33.344870  319301 logs.go:282] 0 containers: []
	W1227 20:14:33.344879  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:33.344886  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:33.344945  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:33.370949  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:33.371011  319301 cri.go:96] found id: ""
	I1227 20:14:33.371033  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:33.371093  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:33.375136  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:33.375211  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:33.403339  319301 cri.go:96] found id: ""
	I1227 20:14:33.403361  319301 logs.go:282] 0 containers: []
	W1227 20:14:33.403370  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:33.403385  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:33.403396  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:33.484170  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:33.484207  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:33.516735  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:33.516766  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:33.534421  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:33.534452  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:33.613759  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:33.613800  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:33.651422  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:33.651450  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:33.759905  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:33.759949  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:33.827184  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:33.819142   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.819867   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.821423   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.822059   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.823552   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:33.819142   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.819867   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.821423   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.822059   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.823552   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:33.827217  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:33.827232  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:33.858891  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:33.858926  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:33.904092  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:33.904128  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:36.431294  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:36.449106  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:36.449178  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:36.480392  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:36.480416  319301 cri.go:96] found id: ""
	I1227 20:14:36.480425  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:36.480481  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:36.485341  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:36.485424  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:36.515111  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:36.515185  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:36.515199  319301 cri.go:96] found id: ""
	I1227 20:14:36.515225  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:36.515283  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:36.519737  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:36.523801  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:36.523877  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:36.550603  319301 cri.go:96] found id: ""
	I1227 20:14:36.550628  319301 logs.go:282] 0 containers: []
	W1227 20:14:36.550637  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:36.550644  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:36.550699  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:36.586466  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:36.586492  319301 cri.go:96] found id: ""
	I1227 20:14:36.586500  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:36.586577  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:36.590067  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:36.590139  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:36.621202  319301 cri.go:96] found id: ""
	I1227 20:14:36.621235  319301 logs.go:282] 0 containers: []
	W1227 20:14:36.621244  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:36.621250  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:36.621308  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:36.647269  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:36.647292  319301 cri.go:96] found id: ""
	I1227 20:14:36.647301  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:36.647379  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:36.651085  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:36.651160  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:36.677749  319301 cri.go:96] found id: ""
	I1227 20:14:36.677778  319301 logs.go:282] 0 containers: []
	W1227 20:14:36.677788  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:36.677804  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:36.677817  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:36.725080  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:36.725110  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:36.755181  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:36.755211  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:36.784468  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:36.784496  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:36.816908  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:36.816940  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:36.834015  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:36.834047  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:36.900869  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:36.892648   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.893851   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.894994   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.895421   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.896907   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:36.892648   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.893851   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.894994   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.895421   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.896907   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:36.900892  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:36.900908  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:36.960391  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:36.960427  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:37.045275  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:37.045325  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:37.148150  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:37.148188  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:39.676095  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:39.686901  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:39.686981  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:39.713632  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:39.713662  319301 cri.go:96] found id: ""
	I1227 20:14:39.713681  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:39.713758  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:39.717685  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:39.717762  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:39.744240  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:39.744264  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:39.744269  319301 cri.go:96] found id: ""
	I1227 20:14:39.744277  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:39.744330  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:39.748168  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:39.751671  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:39.751770  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:39.781268  319301 cri.go:96] found id: ""
	I1227 20:14:39.781293  319301 logs.go:282] 0 containers: []
	W1227 20:14:39.781302  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:39.781309  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:39.781401  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:39.810785  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:39.810807  319301 cri.go:96] found id: ""
	I1227 20:14:39.810815  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:39.810888  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:39.814715  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:39.814784  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:39.841437  319301 cri.go:96] found id: ""
	I1227 20:14:39.841493  319301 logs.go:282] 0 containers: []
	W1227 20:14:39.841503  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:39.841508  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:39.841573  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:39.868907  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:39.868925  319301 cri.go:96] found id: ""
	I1227 20:14:39.868933  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:39.868987  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:39.872674  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:39.872744  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:39.900867  319301 cri.go:96] found id: ""
	I1227 20:14:39.900943  319301 logs.go:282] 0 containers: []
	W1227 20:14:39.900966  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:39.901013  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:39.901043  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:39.918593  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:39.918625  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:39.949056  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:39.949087  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:39.981788  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:39.981818  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:40.105238  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:40.105377  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:40.191666  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:40.183905   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.184449   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.186006   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.186447   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.187950   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:40.183905   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.184449   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.186006   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.186447   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.187950   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:40.191684  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:40.191701  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:40.262140  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:40.262180  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:40.310808  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:40.310845  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:40.337783  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:40.337811  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:40.368704  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:40.368733  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:42.951291  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:42.961621  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:42.961714  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:42.996358  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:42.996382  319301 cri.go:96] found id: ""
	I1227 20:14:42.996391  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:42.996476  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:43.000167  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:43.000258  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:43.042517  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:43.042542  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:43.042547  319301 cri.go:96] found id: ""
	I1227 20:14:43.042555  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:43.042636  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:43.046498  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:43.049992  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:43.050069  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:43.076653  319301 cri.go:96] found id: ""
	I1227 20:14:43.076681  319301 logs.go:282] 0 containers: []
	W1227 20:14:43.076690  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:43.076697  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:43.076755  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:43.104355  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:43.104379  319301 cri.go:96] found id: ""
	I1227 20:14:43.104388  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:43.104444  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:43.108064  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:43.108137  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:43.136746  319301 cri.go:96] found id: ""
	I1227 20:14:43.136771  319301 logs.go:282] 0 containers: []
	W1227 20:14:43.136780  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:43.136786  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:43.136856  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:43.167333  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:43.167354  319301 cri.go:96] found id: ""
	I1227 20:14:43.167362  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:43.167417  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:43.171054  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:43.171167  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:43.196510  319301 cri.go:96] found id: ""
	I1227 20:14:43.196539  319301 logs.go:282] 0 containers: []
	W1227 20:14:43.196548  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:43.196562  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:43.196573  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:43.246188  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:43.246222  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:43.280060  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:43.280088  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:43.364679  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:43.364718  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:43.383405  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:43.383434  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:43.412457  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:43.412484  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:43.441225  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:43.441251  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:43.483277  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:43.483305  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:43.587381  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:43.587418  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:43.657966  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:43.648616   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.649341   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.651243   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.652029   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.653574   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:43.648616   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.649341   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.651243   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.652029   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.653574   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:43.657996  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:43.658011  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
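The pass above (and each of the near-identical cycles that follow) is a fixed sequence: for every control-plane component, list matching container IDs with crictl, then tail the logs of each ID that was found. A minimal local sketch of that loop in Go is below; it is an illustration only, assuming crictl and sudo are available on the current machine, not minikube's actual ssh_runner-based implementation (collectLogs and its argument list are made up for the example).

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// collectLogs mirrors the crictl commands seen in the log above:
// list container IDs per component, then tail each container's logs.
func collectLogs(components []string) {
	for _, name := range components {
		// e.g. sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
		out, err := exec.Command("sudo", "crictl", "--timeout=10s",
			"ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			// e.g. sudo crictl logs --tail 400 <container id>
			logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				fmt.Printf("logs for %s [%s] failed: %v\n", name, id, err)
				continue
			}
			fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
		}
	}
}

func main() {
	collectLogs([]string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"})
}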
	I1227 20:14:46.217780  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:46.229546  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:46.229622  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:46.255054  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:46.255074  319301 cri.go:96] found id: ""
	I1227 20:14:46.255082  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:46.255135  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:46.258848  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:46.258946  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:46.292684  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:46.292758  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:46.292778  319301 cri.go:96] found id: ""
	I1227 20:14:46.292803  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:46.292889  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:46.296621  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:46.300035  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:46.300104  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:46.325669  319301 cri.go:96] found id: ""
	I1227 20:14:46.325694  319301 logs.go:282] 0 containers: []
	W1227 20:14:46.325703  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:46.325709  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:46.325766  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:46.352094  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:46.352159  319301 cri.go:96] found id: ""
	I1227 20:14:46.352182  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:46.352268  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:46.355963  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:46.356077  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:46.381620  319301 cri.go:96] found id: ""
	I1227 20:14:46.381646  319301 logs.go:282] 0 containers: []
	W1227 20:14:46.381656  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:46.381662  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:46.381738  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:46.410104  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:46.410127  319301 cri.go:96] found id: ""
	I1227 20:14:46.410135  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:46.410191  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:46.413648  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:46.413715  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:46.440709  319301 cri.go:96] found id: ""
	I1227 20:14:46.440734  319301 logs.go:282] 0 containers: []
	W1227 20:14:46.440745  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:46.440759  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:46.440781  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:46.469916  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:46.469945  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:46.571819  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:46.571854  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:46.590503  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:46.590531  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:46.624094  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:46.624120  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:46.655415  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:46.655444  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:46.727967  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:46.719794   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.720498   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.722193   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.722714   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.724244   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:46.719794   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.720498   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.722193   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.722714   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.724244   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:46.727989  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:46.728003  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:46.787862  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:46.787899  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:46.848761  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:46.848797  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:46.883658  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:46.883687  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:49.466063  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:49.476365  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:49.476460  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:49.502643  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:49.502665  319301 cri.go:96] found id: ""
	I1227 20:14:49.502673  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:49.502727  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:49.506369  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:49.506443  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:49.532399  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:49.532421  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:49.532427  319301 cri.go:96] found id: ""
	I1227 20:14:49.532435  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:49.532488  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:49.536133  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:49.539580  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:49.539645  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:49.566501  319301 cri.go:96] found id: ""
	I1227 20:14:49.566528  319301 logs.go:282] 0 containers: []
	W1227 20:14:49.566537  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:49.566544  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:49.566605  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:49.602221  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:49.602245  319301 cri.go:96] found id: ""
	I1227 20:14:49.602254  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:49.602316  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:49.606305  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:49.606375  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:49.632906  319301 cri.go:96] found id: ""
	I1227 20:14:49.632931  319301 logs.go:282] 0 containers: []
	W1227 20:14:49.632941  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:49.632946  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:49.633012  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:49.660593  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:49.660616  319301 cri.go:96] found id: ""
	I1227 20:14:49.660625  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:49.660683  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:49.664343  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:49.664414  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:49.691030  319301 cri.go:96] found id: ""
	I1227 20:14:49.691093  319301 logs.go:282] 0 containers: []
	W1227 20:14:49.691110  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:49.691125  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:49.691137  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:49.786516  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:49.786552  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:49.837581  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:49.837615  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:49.923089  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:49.923126  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:49.964776  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:49.964806  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:49.984138  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:49.984166  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:50.053988  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:50.045799   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.046531   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.048064   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.048564   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.050125   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:50.045799   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.046531   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.048064   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.048564   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.050125   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:50.054052  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:50.054072  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:50.080753  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:50.080847  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:50.160335  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:50.160373  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:50.189801  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:50.189831  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:52.722382  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:52.732860  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:52.732954  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:52.759105  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:52.759129  319301 cri.go:96] found id: ""
	I1227 20:14:52.759140  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:52.759192  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:52.763086  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:52.763152  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:52.789342  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:52.789365  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:52.789370  319301 cri.go:96] found id: ""
	I1227 20:14:52.789378  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:52.789441  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:52.793045  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:52.796599  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:52.796677  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:52.821951  319301 cri.go:96] found id: ""
	I1227 20:14:52.821975  319301 logs.go:282] 0 containers: []
	W1227 20:14:52.821984  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:52.821990  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:52.822048  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:52.848207  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:52.848227  319301 cri.go:96] found id: ""
	I1227 20:14:52.848235  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:52.848290  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:52.852016  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:52.852114  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:52.878718  319301 cri.go:96] found id: ""
	I1227 20:14:52.878752  319301 logs.go:282] 0 containers: []
	W1227 20:14:52.878761  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:52.878768  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:52.878826  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:52.905928  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:52.906001  319301 cri.go:96] found id: ""
	I1227 20:14:52.906023  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:52.906113  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:52.910178  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:52.910250  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:52.937172  319301 cri.go:96] found id: ""
	I1227 20:14:52.937209  319301 logs.go:282] 0 containers: []
	W1227 20:14:52.937218  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:52.937231  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:52.937249  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:52.966131  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:52.966162  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:53.003464  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:53.003490  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:53.021719  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:53.021777  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:53.091033  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:53.081906   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.083382   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.084066   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.085728   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.086021   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:53.081906   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.083382   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.084066   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.085728   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.086021   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:53.091054  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:53.091067  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:53.153878  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:53.153918  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:53.184615  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:53.184643  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:53.268968  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:53.269005  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:53.374253  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:53.374287  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:53.403008  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:53.403044  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
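Every "describe nodes" attempt in these cycles fails the same way: kubectl cannot reach the apiserver's secure port, and the dial to localhost:8443 is refused. A minimal sketch of probing that endpoint directly is below, assuming the default port 8443 shown in the errors; it only checks TCP reachability, which is the layer the failures above occur at.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Plain TCP dial to the same address the failing kubectl calls use.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Corresponds to the "connection refused" in the describe-nodes stderr.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}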
	I1227 20:14:55.952353  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:55.962631  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:55.962719  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:55.995078  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:55.995100  319301 cri.go:96] found id: ""
	I1227 20:14:55.995108  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:55.995174  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:55.999787  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:55.999857  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:56.034785  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:56.034809  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:56.034814  319301 cri.go:96] found id: ""
	I1227 20:14:56.034821  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:56.034886  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:56.039026  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:56.043109  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:56.043239  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:56.076322  319301 cri.go:96] found id: ""
	I1227 20:14:56.076349  319301 logs.go:282] 0 containers: []
	W1227 20:14:56.076358  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:56.076365  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:56.076450  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:56.105910  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:56.105937  319301 cri.go:96] found id: ""
	I1227 20:14:56.105945  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:56.106024  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:56.109833  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:56.109951  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:56.136658  319301 cri.go:96] found id: ""
	I1227 20:14:56.136681  319301 logs.go:282] 0 containers: []
	W1227 20:14:56.136690  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:56.136696  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:56.136751  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:56.162379  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:56.162402  319301 cri.go:96] found id: ""
	I1227 20:14:56.162409  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:56.162464  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:56.165959  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:56.166030  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:56.193023  319301 cri.go:96] found id: ""
	I1227 20:14:56.193057  319301 logs.go:282] 0 containers: []
	W1227 20:14:56.193066  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:56.193097  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:56.193131  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:56.219549  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:56.219577  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:56.255190  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:56.255218  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:56.326655  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:56.326690  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:56.369967  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:56.370002  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:56.449778  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:56.449815  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:56.481804  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:56.481833  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:56.580473  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:56.580507  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:56.597748  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:56.597781  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:56.675164  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:56.667282   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.668004   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.669569   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.670031   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.671487   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:56.667282   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.668004   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.669569   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.670031   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.671487   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:56.675187  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:56.675210  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:59.204907  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:59.215384  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:59.215464  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:59.241010  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:59.241041  319301 cri.go:96] found id: ""
	I1227 20:14:59.241056  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:59.241157  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:59.245340  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:59.245433  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:59.282857  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:59.282880  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:59.282886  319301 cri.go:96] found id: ""
	I1227 20:14:59.282893  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:59.282945  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:59.286535  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:59.289810  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:59.289879  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:59.317473  319301 cri.go:96] found id: ""
	I1227 20:14:59.317509  319301 logs.go:282] 0 containers: []
	W1227 20:14:59.317517  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:59.317524  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:59.317593  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:59.350932  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:59.350952  319301 cri.go:96] found id: ""
	I1227 20:14:59.350961  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:59.351015  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:59.354698  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:59.354768  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:59.381626  319301 cri.go:96] found id: ""
	I1227 20:14:59.381660  319301 logs.go:282] 0 containers: []
	W1227 20:14:59.381669  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:59.381675  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:59.381730  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:59.408107  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:59.408130  319301 cri.go:96] found id: ""
	I1227 20:14:59.408140  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:59.408216  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:59.411771  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:59.411846  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:59.436633  319301 cri.go:96] found id: ""
	I1227 20:14:59.436660  319301 logs.go:282] 0 containers: []
	W1227 20:14:59.436669  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:59.436683  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:59.436695  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:59.532932  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:59.532968  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:59.601543  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:59.593318   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.594069   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.595883   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.596441   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.597498   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:59.593318   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.594069   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.595883   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.596441   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.597498   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:59.601573  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:59.601587  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:59.630627  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:59.630653  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:59.691462  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:59.691537  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:59.736271  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:59.736311  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:59.763317  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:59.763349  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:59.845478  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:59.845512  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:59.877233  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:59.877259  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:59.894077  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:59.894108  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
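Besides per-container logs, each pass also pulls host-level sources: the kubelet and CRI-O units from journald, a filtered dmesg, and a container-status listing. A sketch of just the journald and dmesg part is below, assuming the unit names used above; the dmesg invocation is a trimmed variant of the filter in the log, not the exact flags.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Host-level sources collected in each pass above.
	cmds := [][]string{
		{"sudo", "journalctl", "-u", "kubelet", "-n", "400"},
		{"sudo", "journalctl", "-u", "crio", "-n", "400"},
		{"bash", "-c", "sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400"},
	}
	for _, c := range cmds {
		out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
		if err != nil {
			fmt.Printf("%v failed: %v\n", c, err)
			continue
		}
		fmt.Printf("=== %v ===\n%s\n", c, out)
	}
}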
	I1227 20:15:02.425928  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:15:02.437025  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:15:02.437097  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:15:02.462847  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:15:02.462876  319301 cri.go:96] found id: ""
	I1227 20:15:02.462885  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:15:02.462941  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:02.466840  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:15:02.466915  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:15:02.493867  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:15:02.493889  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:15:02.493895  319301 cri.go:96] found id: ""
	I1227 20:15:02.493903  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:15:02.493986  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:02.497849  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:02.501391  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:15:02.501500  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:15:02.531735  319301 cri.go:96] found id: ""
	I1227 20:15:02.531761  319301 logs.go:282] 0 containers: []
	W1227 20:15:02.531771  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:15:02.531779  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:15:02.531858  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:15:02.557699  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:15:02.557723  319301 cri.go:96] found id: ""
	I1227 20:15:02.557732  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:15:02.557792  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:02.561785  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:15:02.561860  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:15:02.588584  319301 cri.go:96] found id: ""
	I1227 20:15:02.588611  319301 logs.go:282] 0 containers: []
	W1227 20:15:02.588620  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:15:02.588665  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:15:02.588727  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:15:02.626246  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:02.626270  319301 cri.go:96] found id: ""
	I1227 20:15:02.626279  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:15:02.626332  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:02.630342  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:15:02.630416  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:15:02.658875  319301 cri.go:96] found id: ""
	I1227 20:15:02.658899  319301 logs.go:282] 0 containers: []
	W1227 20:15:02.658908  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:15:02.658940  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:15:02.658959  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:15:02.760567  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:15:02.760609  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:15:02.779705  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:15:02.779737  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:15:02.864780  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:15:02.844552   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.845307   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.847070   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.847814   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.850808   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:15:02.844552   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.845307   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.847070   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.847814   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.850808   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:15:02.864807  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:15:02.864822  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:15:02.930564  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:15:02.930600  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:15:02.956647  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:15:02.956674  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:02.988569  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:15:02.988644  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:15:03.080368  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:15:03.080404  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:15:03.109214  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:15:03.109254  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:15:03.154097  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:15:03.154130  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:15:05.702871  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:15:05.713737  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:15:05.713808  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:15:05.747061  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:15:05.747087  319301 cri.go:96] found id: ""
	I1227 20:15:05.747097  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:15:05.747152  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:05.751069  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:15:05.751142  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:15:05.778241  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:15:05.778264  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:15:05.778269  319301 cri.go:96] found id: ""
	I1227 20:15:05.778276  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:15:05.778330  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:05.781970  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:05.785615  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:15:05.785684  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:15:05.811372  319301 cri.go:96] found id: ""
	I1227 20:15:05.811405  319301 logs.go:282] 0 containers: []
	W1227 20:15:05.811419  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:15:05.811426  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:15:05.811487  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:15:05.837308  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:15:05.837331  319301 cri.go:96] found id: ""
	I1227 20:15:05.837339  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:15:05.837394  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:05.841435  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:15:05.841563  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:15:05.872145  319301 cri.go:96] found id: ""
	I1227 20:15:05.872175  319301 logs.go:282] 0 containers: []
	W1227 20:15:05.872184  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:15:05.872191  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:15:05.872248  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:15:05.905843  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:05.905863  319301 cri.go:96] found id: ""
	I1227 20:15:05.905872  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:15:05.905928  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:05.909362  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:15:05.909433  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:15:05.937743  319301 cri.go:96] found id: ""
	I1227 20:15:05.937768  319301 logs.go:282] 0 containers: []
	W1227 20:15:05.937776  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:15:05.937789  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:15:05.937805  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:15:05.956337  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:15:05.956373  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:15:06.027819  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:15:06.027857  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:15:06.055387  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:15:06.055417  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:15:06.087848  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:15:06.087876  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:15:06.191189  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:15:06.191225  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:15:06.260486  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:15:06.252420   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.253150   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.254651   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.255097   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.256545   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:15:06.252420   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.253150   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.254651   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.255097   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.256545   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:15:06.260512  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:15:06.260527  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:15:06.289045  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:15:06.289074  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:15:06.340456  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:15:06.340493  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:06.367177  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:15:06.367209  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:15:08.948368  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:15:08.960093  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:15:08.960163  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:15:09.004464  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:15:09.004531  319301 cri.go:96] found id: ""
	I1227 20:15:09.004541  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:15:09.004627  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:09.008790  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:15:09.008905  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:15:09.041635  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:15:09.041705  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:15:09.041727  319301 cri.go:96] found id: ""
	I1227 20:15:09.041750  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:15:09.041834  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:09.046563  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:09.050558  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:15:09.050679  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:15:09.079147  319301 cri.go:96] found id: ""
	I1227 20:15:09.079218  319301 logs.go:282] 0 containers: []
	W1227 20:15:09.079241  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:15:09.079265  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:15:09.079350  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:15:09.115659  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:15:09.115728  319301 cri.go:96] found id: ""
	I1227 20:15:09.115749  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:15:09.115833  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:09.119927  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:15:09.120060  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:15:09.148832  319301 cri.go:96] found id: ""
	I1227 20:15:09.148905  319301 logs.go:282] 0 containers: []
	W1227 20:15:09.148927  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:15:09.148951  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:15:09.149036  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:15:09.193967  319301 cri.go:96] found id: "d4599a49838601138827173ae16d1700bf9c506a4f9611f8f2415da1ea387070"
	I1227 20:15:09.194039  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:09.194058  319301 cri.go:96] found id: ""
	I1227 20:15:09.194083  319301 logs.go:282] 2 containers: [d4599a49838601138827173ae16d1700bf9c506a4f9611f8f2415da1ea387070 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:15:09.194168  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:09.198186  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:09.202291  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:15:09.202369  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:15:09.233220  319301 cri.go:96] found id: ""
	I1227 20:15:09.233256  319301 logs.go:282] 0 containers: []
	W1227 20:15:09.233266  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:15:09.233275  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:15:09.233286  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:15:09.265208  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:15:09.265236  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:15:09.366491  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:15:09.366527  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:15:09.385049  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:15:09.385152  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:15:09.416669  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:15:09.416697  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:15:09.477821  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:15:09.477862  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:15:09.503656  319301 logs.go:123] Gathering logs for kube-controller-manager [d4599a49838601138827173ae16d1700bf9c506a4f9611f8f2415da1ea387070] ...
	I1227 20:15:09.503682  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4599a49838601138827173ae16d1700bf9c506a4f9611f8f2415da1ea387070"
	I1227 20:15:09.529517  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:15:09.529549  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:15:09.594024  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:15:09.583997   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.584731   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.586847   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.587585   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.589403   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:15:09.583997   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.584731   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.586847   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.587585   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.589403   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:15:09.594044  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:15:09.594113  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:15:09.641021  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:15:09.641054  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:09.671469  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:15:09.671497  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:15:12.247384  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:15:12.261411  319301 out.go:203] 
	W1227 20:15:12.264240  319301 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1227 20:15:12.264279  319301 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1227 20:15:12.264291  319301 out.go:285] * Related issues:
	W1227 20:15:12.264307  319301 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1227 20:15:12.264322  319301 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1227 20:15:12.272645  319301 out.go:203] 
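
	The run aborts here with K8S_APISERVER_MISSING: the `sudo pgrep -xnf kube-apiserver.*minikube.*` probe above never finds a live apiserver process inside the 6m0s window, even though crictl still reports a kube-apiserver container. The same checks can be replayed by hand while the ha-422549 profile is still up; this is a hedged sketch that simply mirrors the pgrep/crictl invocations recorded above (the probe may have been running against a secondary node, in which case a --node flag is needed) and is not part of the captured output:

	# does an apiserver process exist at all?
	minikube ssh -p ha-422549 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# what does CRI-O think the apiserver container is doing?
	minikube ssh -p ha-422549 -- sudo crictl ps -a --name kube-apiserver
	# then, using an ID printed by the previous command (e.g. the one seen in the log above):
	minikube ssh -p ha-422549 -- sudo crictl logs --tail 100 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722
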
	
	
	==> CRI-O <==
	Dec 27 20:09:47 ha-422549 crio[668]: time="2025-12-27T20:09:47.963961573Z" level=info msg="Started container" PID=1443 containerID=810850466f08e002011f0d991e32eb0109be47db69714d6e333a070593589ffc description=kube-system/kube-controller-manager-ha-422549/kube-controller-manager id=4c2fe289-ef21-4410-b80d-903288016926 name=/runtime.v1.RuntimeService/StartContainer sandboxID=38efda04ee9aef0e7908e0db5c261b87e7e5100a62c84932b9b7ba0d61a4d0b2
	Dec 27 20:09:49 ha-422549 conmon[1210]: conmon b67722550482449b8daa <ninfo>: container 1212 exited with status 1
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.376459079Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=69065085-21ea-41c3-802a-261d89524c56 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.377242719Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1df6dc90-5ba0-4b74-852c-4cf7aefb23f0 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.378198249Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=cee7eb55-89b4-4b4e-840f-5adab55395f1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.378318031Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.390342199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.390574781Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d51da34059b2d7dc5c5989964247fd01aabd5fa31dd489fcbed003c93c5d0a79/merged/etc/passwd: no such file or directory"
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.390683445Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d51da34059b2d7dc5c5989964247fd01aabd5fa31dd489fcbed003c93c5d0a79/merged/etc/group: no such file or directory"
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.391133051Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.407049484Z" level=info msg="Created container 39052e86fac88d7cd6484a6d581397a09660e8626a668440758c42943ffc493c: kube-system/storage-provisioner/storage-provisioner" id=cee7eb55-89b4-4b4e-840f-5adab55395f1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.408066239Z" level=info msg="Starting container: 39052e86fac88d7cd6484a6d581397a09660e8626a668440758c42943ffc493c" id=a1f177fc-11ea-4dd9-a25c-b20aa52a0229 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.409701176Z" level=info msg="Started container" PID=1456 containerID=39052e86fac88d7cd6484a6d581397a09660e8626a668440758c42943ffc493c description=kube-system/storage-provisioner/storage-provisioner id=a1f177fc-11ea-4dd9-a25c-b20aa52a0229 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c0df0f45f11cf21c22800d785af6947dd7131cfe5dea11e9e2d6c844bc352c0a
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.443600032Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.447069767Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.447101142Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.44712181Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.451793967Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.451824431Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.451847585Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.455975682Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.456009075Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.456031754Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.458926316Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.45895939Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	39052e86fac88       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Running             storage-provisioner       2                   c0df0f45f11cf       storage-provisioner                 kube-system
	810850466f08e       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   5 minutes ago       Running             kube-controller-manager   5                   38efda04ee9ae       kube-controller-manager-ha-422549   kube-system
	deb6daab23cec       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf   5 minutes ago       Running             coredns                   1                   72c204b703743       coredns-7d764666f9-n5d9d            kube-system
	43a1d9657d3c8       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf   5 minutes ago       Running             coredns                   1                   270010189bb39       coredns-7d764666f9-mf5xw            kube-system
	b677225504824       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Exited              storage-provisioner       1                   c0df0f45f11cf       storage-provisioner                 kube-system
	10122e623612b       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   5 minutes ago       Running             busybox                   1                   b045d6d9411c4       busybox-769dd8b7dd-k7ks6            default
	790f2c013c89e       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13   5 minutes ago       Running             kindnet-cni               1                   963cd2abb4546       kindnet-qkqmv                       kube-system
	0dc7fc3f72aac       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   5 minutes ago       Running             kube-proxy                1                   d7813942f329c       kube-proxy-mhmmn                    kube-system
	200f949dea5c6       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   6 minutes ago       Exited              kube-controller-manager   4                   38efda04ee9ae       kube-controller-manager-ha-422549   kube-system
	a2c772463ab69       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   6 minutes ago       Running             kube-apiserver            2                   8bfe137c6f9b3       kube-apiserver-ha-422549            kube-system
	c3f87ac29708d       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   7 minutes ago       Exited              kube-apiserver            1                   8bfe137c6f9b3       kube-apiserver-ha-422549            kube-system
	79f65bc2e1dbc       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   7 minutes ago       Running             etcd                      1                   f60298eb8266f       etcd-ha-422549                      kube-system
	dd811e752da4c       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   7 minutes ago       Running             kube-scheduler            1                   ce9729522201c       kube-scheduler-ha-422549            kube-system
	feeed30c26dbb       28c5662932f6032ee4faba083d9c2af90232797e1d4f89d9892cb92b26fec299   7 minutes ago       Running             kube-vip                  0                   1eca96f45960b       kube-vip-ha-422549                  kube-system
	
	
	==> coredns [43a1d9657d3c893603414e1fad6c7f34c4c4ed3f7f0f2369eb8490cc9ea240ec] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:47173 - 60767 "HINFO IN 8301766955164973522.8999772451794302158. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029591992s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	
	
	==> coredns [deb6daab23cece988ebd68d94f1237fabdfd9ad9729504264927da30e4c1b5a0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:35210 - 10149 "HINFO IN 5398190722329959175.7924831905691569149. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027114236s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
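
	Both coredns replicas report "waiting for Kubernetes API" and then "Failed to watch": the pods start, but cannot hold a watch against the apiserver, which lines up with the connection-refused errors to localhost:8443 earlier in this log. A hedged way to confirm whether the endpoint answers again, reusing the same kubectl binary and kubeconfig the describe-nodes command above uses (illustrative only; it will keep failing while the apiserver is down):

	minikube ssh -p ha-422549 -- sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz
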
	
	
	==> describe nodes <==
	Name:               ha-422549
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_03_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:03:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:15:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:11:30 +0000   Sat, 27 Dec 2025 20:03:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:11:30 +0000   Sat, 27 Dec 2025 20:03:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:11:30 +0000   Sat, 27 Dec 2025 20:03:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:11:30 +0000   Sat, 27 Dec 2025 20:09:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-422549
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                acd356f3-8732-454f-9ea5-4ebb90b80a04
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-769dd8b7dd-k7ks6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7d764666f9-mf5xw             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 coredns-7d764666f9-n5d9d             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 etcd-ha-422549                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-qkqmv                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-422549             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-422549    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-mhmmn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-422549             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-422549                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  10m    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  8m25s  node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  5m26s  node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	
	
	Name:               ha-422549-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_27T20_04_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:04:00 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:06:58 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 27 Dec 2025 20:06:47 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 27 Dec 2025 20:06:47 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 27 Dec 2025 20:06:47 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 27 Dec 2025 20:06:47 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-422549-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                279e934d-6d34-4a11-83f0-a7f36011d6a2
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-769dd8b7dd-v6vks                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-422549-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-5wczs                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-422549-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-422549-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-nqr7h                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-422549-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-422549-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  10m    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  8m25s  node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  5m26s  node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  NodeNotReady    4m36s  node-controller  Node ha-422549-m02 status is now: NodeNotReady
	
	
	Name:               ha-422549-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_27T20_04_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:04:47 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:06:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 27 Dec 2025 20:06:39 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 27 Dec 2025 20:06:39 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 27 Dec 2025 20:06:39 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 27 Dec 2025 20:06:39 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-422549-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                dd826b6d-21ec-45c4-b392-2d4b9b2daddb
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-769dd8b7dd-qcz4b                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-422549-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-28svl                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-422549-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-422549-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-cg4z5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-422549-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-422549-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  10m    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  10m    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  10m    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  8m25s  node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  5m26s  node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  NodeNotReady    4m36s  node-controller  Node ha-422549-m03 status is now: NodeNotReady
	
	
	Name:               ha-422549-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_27T20_05_33_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:05:32 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:06:44 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 27 Dec 2025 20:06:44 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 27 Dec 2025 20:06:44 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 27 Dec 2025 20:06:44 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 27 Dec 2025 20:06:44 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-422549-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                45c0e480-898e-46d5-83ce-c457d7b4b021
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4hl7v       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m43s
	  kube-system                 kube-proxy-kscg6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  9m41s  node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  9m41s  node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  9m39s  node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  8m25s  node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  5m26s  node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  NodeNotReady    4m36s  node-controller  Node ha-422549-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Dec27 19:24] overlayfs: idmapped layers are currently not supported
	[Dec27 19:25] overlayfs: idmapped layers are currently not supported
	[Dec27 19:26] overlayfs: idmapped layers are currently not supported
	[ +16.831724] overlayfs: idmapped layers are currently not supported
	[Dec27 19:27] overlayfs: idmapped layers are currently not supported
	[Dec27 19:28] overlayfs: idmapped layers are currently not supported
	[ +28.388596] overlayfs: idmapped layers are currently not supported
	[Dec27 19:29] overlayfs: idmapped layers are currently not supported
	[  +9.242530] overlayfs: idmapped layers are currently not supported
	[Dec27 19:30] overlayfs: idmapped layers are currently not supported
	[ +11.577339] overlayfs: idmapped layers are currently not supported
	[Dec27 19:32] overlayfs: idmapped layers are currently not supported
	[ +19.186532] overlayfs: idmapped layers are currently not supported
	[Dec27 19:34] overlayfs: idmapped layers are currently not supported
	[Dec27 19:54] kauditd_printk_skb: 8 callbacks suppressed
	[Dec27 19:56] overlayfs: idmapped layers are currently not supported
	[Dec27 19:59] overlayfs: idmapped layers are currently not supported
	[Dec27 20:00] overlayfs: idmapped layers are currently not supported
	[Dec27 20:03] overlayfs: idmapped layers are currently not supported
	[ +31.019083] overlayfs: idmapped layers are currently not supported
	[Dec27 20:04] overlayfs: idmapped layers are currently not supported
	[Dec27 20:05] overlayfs: idmapped layers are currently not supported
	[Dec27 20:06] overlayfs: idmapped layers are currently not supported
	[Dec27 20:07] overlayfs: idmapped layers are currently not supported
	[  +3.687478] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [79f65bc2e1dbcf7ebe07acaf2143b45f059da3390e107fc3eb87595ccc5f920d] <==
	{"level":"warn","ts":"2025-12-27T20:15:15.427336Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:15.441809Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:15.475847Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:15.482747Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:15.499113Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:15.507595Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:15.516627Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:15.525177Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:15.528304Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:15.532329Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:15.539526Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:15.541498Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:15.547520Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:15.550888Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:15.554559Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:15.558064Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:15.565332Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:15.572633Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:15.576318Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:15.579568Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:15.582805Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:15.590055Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:15.597085Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:15.608484Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:15.641936Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:15:15 up  1:57,  0 user,  load average: 0.42, 1.05, 1.34
	Linux ha-422549 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [790f2c013c89e320d6ae1872fcbeb0dcede9e548fae087919a1d710b26587af9] <==
	I1227 20:14:39.450626       1 main.go:301] handling current node
	I1227 20:14:49.450261       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 20:14:49.450305       1 main.go:301] handling current node
	I1227 20:14:49.450322       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1227 20:14:49.450328       1 main.go:324] Node ha-422549-m02 has CIDR [10.244.1.0/24] 
	I1227 20:14:49.450461       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1227 20:14:49.450477       1 main.go:324] Node ha-422549-m03 has CIDR [10.244.2.0/24] 
	I1227 20:14:49.450534       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1227 20:14:49.450546       1 main.go:324] Node ha-422549-m04 has CIDR [10.244.3.0/24] 
	I1227 20:14:59.445558       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 20:14:59.445661       1 main.go:301] handling current node
	I1227 20:14:59.445700       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1227 20:14:59.445735       1 main.go:324] Node ha-422549-m02 has CIDR [10.244.1.0/24] 
	I1227 20:14:59.445899       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1227 20:14:59.445935       1 main.go:324] Node ha-422549-m03 has CIDR [10.244.2.0/24] 
	I1227 20:14:59.446020       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1227 20:14:59.446055       1 main.go:324] Node ha-422549-m04 has CIDR [10.244.3.0/24] 
	I1227 20:15:09.445623       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 20:15:09.445660       1 main.go:301] handling current node
	I1227 20:15:09.445676       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1227 20:15:09.445682       1 main.go:324] Node ha-422549-m02 has CIDR [10.244.1.0/24] 
	I1227 20:15:09.445872       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1227 20:15:09.445881       1 main.go:324] Node ha-422549-m03 has CIDR [10.244.2.0/24] 
	I1227 20:15:09.446114       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1227 20:15:09.446126       1 main.go:324] Node ha-422549-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a2c772463ab69455651df640481fbedb03fe6400b56096056428e79c07be9499] <==
	I1227 20:09:16.090173       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:09:16.142608       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 20:09:16.165012       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:09:16.188215       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:16.247286       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 20:09:17.588850       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 20:09:17.588862       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 20:09:17.591046       1 cache.go:39] Caches are synced for autoregister controller
	I1227 20:09:17.591196       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:17.591213       1 policy_source.go:248] refreshing policies
	I1227 20:09:17.594498       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 20:09:17.632882       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 20:09:18.590962       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 20:09:18.719267       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:09:18.730017       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1227 20:09:18.736565       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1227 20:09:18.757260       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:09:18.776199       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:09:18.793727       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 20:09:18.793809       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	W1227 20:09:18.871915       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	W1227 20:09:38.848605       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1227 20:09:50.148007       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:09:50.298023       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:10:40.117662       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-apiserver [c3f87ac29708d39b5580f953e8ccc765b36b830cf405bc7750b8afe798a15a77] <==
	{"level":"warn","ts":"2025-12-27T20:08:34.277834Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400203fc20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277853Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400144c3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277870Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400203f2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277886Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002670b40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277902Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40029112c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277921Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001cc2f00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277917Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40021472c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277951Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002a345a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277938Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002cbb2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277969Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002671c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277982Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026703c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278004Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f51c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278007Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40029fd680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278023Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400144cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278027Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002d0ef00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278040Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002cba960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278044Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002d0ef00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278056Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026714a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278062Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000ea3c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278071Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400102d2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278373Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40021472c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	F1227 20:08:39.300772       1 hooks.go:204] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	{"level":"warn","ts":"2025-12-27T20:08:39.399795Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400102d2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	E1227 20:08:39.400034       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	
	
	==> kube-controller-manager [200f949dea5c60d38a5d90e0270e6343a89f068bd2083ee55915c81023b0e023] <==
	I1227 20:08:47.677940       1 serving.go:386] Generated self-signed cert in-memory
	I1227 20:08:47.685798       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1227 20:08:47.685893       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:08:47.687365       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1227 20:08:47.687564       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1227 20:08:47.687645       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1227 20:08:47.687811       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1227 20:08:57.704670       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [810850466f08e002011f0d991e32eb0109be47db69714d6e333a070593589ffc] <==
	I1227 20:09:49.817998       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.818055       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.818125       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.818182       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.818296       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.818398       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 20:09:49.823879       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.824187       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.824238       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.824323       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.826908       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549-m04"
	I1227 20:09:49.826980       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549"
	I1227 20:09:49.827019       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549-m02"
	I1227 20:09:49.827146       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549-m03"
	I1227 20:09:49.831582       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.831626       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.831651       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.837170       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 20:09:49.903784       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.914954       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.915054       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:09:49.915069       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:10:39.887314       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-422549-m04"
	I1227 20:10:39.888758       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-422549-m04"
	I1227 20:10:40.332581       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="PartialDisruption"
	
	
	==> kube-proxy [0dc7fc3f72aac5f705d9afdbd65e7c9da34760b5dcbc880ecf6236b8d0c7a88c] <==
	I1227 20:09:19.404089       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:09:19.491223       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:09:19.592597       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:19.592728       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1227 20:09:19.592858       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:09:19.644888       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:09:19.644944       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:09:19.649692       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:09:19.649993       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:09:19.650014       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:09:19.652082       1 config.go:200] "Starting service config controller"
	I1227 20:09:19.652103       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:09:19.652121       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:09:19.652124       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:09:19.652134       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:09:19.652138       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:09:19.652805       1 config.go:309] "Starting node config controller"
	I1227 20:09:19.652821       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:09:19.652829       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:09:19.753198       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 20:09:19.753207       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:09:19.753242       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [dd811e752da4c2025246e605ecc1690aba8141353e20fb91cdad4468a1c059f9] <==
	E1227 20:08:19.506524       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 20:08:19.569107       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 20:08:20.320229       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 20:08:20.376812       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 20:08:21.129930       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 20:08:39.022443       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 20:08:43.570864       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 20:08:47.134070       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 20:08:48.738392       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 20:08:49.986460       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 20:08:49.987992       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 20:08:50.727843       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 20:08:50.956450       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 20:08:51.960069       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 20:08:53.165271       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 20:08:57.344100       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 20:08:59.543840       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 20:09:01.253158       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 20:09:01.270041       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 20:09:01.345742       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 20:09:01.466100       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 20:09:02.611833       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 20:09:09.548910       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 20:09:10.555054       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	I1227 20:09:56.031915       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:10:25 ha-422549 kubelet[804]: E1227 20:10:25.927278     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-ha-422549" containerName="etcd"
	Dec 27 20:10:28 ha-422549 kubelet[804]: E1227 20:10:28.927151     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-ha-422549" containerName="kube-apiserver"
	Dec 27 20:10:29 ha-422549 kubelet[804]: I1227 20:10:29.927768     804 kubelet.go:3323] "Trying to delete pod" pod="kube-system/kube-vip-ha-422549" podUID="27494a9a-1459-4c40-99d3-c3e21df433ef"
	Dec 27 20:10:29 ha-422549 kubelet[804]: I1227 20:10:29.944622     804 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-422549"
	Dec 27 20:10:29 ha-422549 kubelet[804]: I1227 20:10:29.944659     804 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-422549"
	Dec 27 20:11:02 ha-422549 kubelet[804]: E1227 20:11:02.926814     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	Dec 27 20:11:12 ha-422549 kubelet[804]: E1227 20:11:12.927477     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	Dec 27 20:11:13 ha-422549 kubelet[804]: E1227 20:11:13.926597     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-ha-422549" containerName="kube-controller-manager"
	Dec 27 20:11:14 ha-422549 kubelet[804]: E1227 20:11:14.926505     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-ha-422549" containerName="kube-scheduler"
	Dec 27 20:11:33 ha-422549 kubelet[804]: E1227 20:11:33.928211     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-ha-422549" containerName="kube-apiserver"
	Dec 27 20:11:45 ha-422549 kubelet[804]: E1227 20:11:45.927376     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-ha-422549" containerName="etcd"
	Dec 27 20:12:25 ha-422549 kubelet[804]: E1227 20:12:25.926700     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	Dec 27 20:12:39 ha-422549 kubelet[804]: E1227 20:12:39.927819     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-ha-422549" containerName="kube-controller-manager"
	Dec 27 20:12:41 ha-422549 kubelet[804]: E1227 20:12:41.928937     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	Dec 27 20:12:44 ha-422549 kubelet[804]: E1227 20:12:44.927340     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-ha-422549" containerName="kube-scheduler"
	Dec 27 20:12:52 ha-422549 kubelet[804]: E1227 20:12:52.926348     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-ha-422549" containerName="kube-apiserver"
	Dec 27 20:13:04 ha-422549 kubelet[804]: E1227 20:13:04.927081     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-ha-422549" containerName="etcd"
	Dec 27 20:13:35 ha-422549 kubelet[804]: E1227 20:13:35.927017     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	Dec 27 20:13:53 ha-422549 kubelet[804]: E1227 20:13:53.926931     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	Dec 27 20:14:05 ha-422549 kubelet[804]: E1227 20:14:05.927026     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-ha-422549" containerName="kube-apiserver"
	Dec 27 20:14:09 ha-422549 kubelet[804]: E1227 20:14:09.926884     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-ha-422549" containerName="kube-controller-manager"
	Dec 27 20:14:11 ha-422549 kubelet[804]: E1227 20:14:11.927165     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-ha-422549" containerName="kube-scheduler"
	Dec 27 20:14:21 ha-422549 kubelet[804]: E1227 20:14:21.927398     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-ha-422549" containerName="etcd"
	Dec 27 20:14:55 ha-422549 kubelet[804]: E1227 20:14:55.927938     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	Dec 27 20:15:04 ha-422549 kubelet[804]: E1227 20:15:04.926424     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-422549 -n ha-422549
helpers_test.go:270: (dbg) Run:  kubectl --context ha-422549 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (512.67s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (5.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-422549 node delete m03 --alsologtostderr -v 5: exit status 83 (191.890599ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-422549-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-422549"

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:15:17.894922  335331 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:15:17.895753  335331 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:15:17.895770  335331 out.go:374] Setting ErrFile to fd 2...
	I1227 20:15:17.895777  335331 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:15:17.896580  335331 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:15:17.896932  335331 mustload.go:66] Loading cluster: ha-422549
	I1227 20:15:17.897486  335331 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:17.898011  335331 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:15:17.916105  335331 host.go:66] Checking if "ha-422549" exists ...
	I1227 20:15:17.916443  335331 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:15:17.991907  335331 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-27 20:15:17.982473817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:15:17.992312  335331 cli_runner.go:164] Run: docker container inspect ha-422549-m02 --format={{.State.Status}}
	I1227 20:15:18.009791  335331 host.go:66] Checking if "ha-422549-m02" exists ...
	I1227 20:15:18.010291  335331 cli_runner.go:164] Run: docker container inspect ha-422549-m03 --format={{.State.Status}}
	I1227 20:15:18.033238  335331 out.go:179] * The control-plane node ha-422549-m03 host is not running: state=Stopped
	I1227 20:15:18.036071  335331 out.go:179]   To start a cluster, run: "minikube start -p ha-422549"

                                                
                                                
** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-linux-arm64 -p ha-422549 node delete m03 --alsologtostderr -v 5": exit status 83
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5: exit status 7 (582.77218ms)

                                                
                                                
-- stdout --
	ha-422549
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-422549-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-422549-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-422549-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:15:18.098713  335387 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:15:18.098861  335387 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:15:18.098919  335387 out.go:374] Setting ErrFile to fd 2...
	I1227 20:15:18.098963  335387 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:15:18.099484  335387 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:15:18.099816  335387 out.go:368] Setting JSON to false
	I1227 20:15:18.099871  335387 mustload.go:66] Loading cluster: ha-422549
	I1227 20:15:18.100707  335387 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:18.100789  335387 status.go:174] checking status of ha-422549 ...
	I1227 20:15:18.101766  335387 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:15:18.102780  335387 notify.go:221] Checking for updates...
	I1227 20:15:18.128084  335387 status.go:371] ha-422549 host status = "Running" (err=<nil>)
	I1227 20:15:18.128109  335387 host.go:66] Checking if "ha-422549" exists ...
	I1227 20:15:18.128434  335387 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:15:18.157582  335387 host.go:66] Checking if "ha-422549" exists ...
	I1227 20:15:18.158055  335387 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:15:18.158155  335387 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:18.178732  335387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:15:18.276872  335387 ssh_runner.go:195] Run: systemctl --version
	I1227 20:15:18.285194  335387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:15:18.298619  335387 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:15:18.354681  335387 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-27 20:15:18.345850175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:15:18.355192  335387 kubeconfig.go:125] found "ha-422549" server: "https://192.168.49.254:8443"
	I1227 20:15:18.355230  335387 api_server.go:166] Checking apiserver status ...
	I1227 20:15:18.355277  335387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:15:18.366547  335387 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1085/cgroup
	I1227 20:15:18.374353  335387 api_server.go:192] apiserver freezer: "4:freezer:/docker/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/crio/crio-a2c772463ab69455651df640481fbedb03fe6400b56096056428e79c07be9499"
	I1227 20:15:18.374422  335387 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/crio/crio-a2c772463ab69455651df640481fbedb03fe6400b56096056428e79c07be9499/freezer.state
	I1227 20:15:18.381752  335387 api_server.go:214] freezer state: "THAWED"
	I1227 20:15:18.381787  335387 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1227 20:15:18.391150  335387 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1227 20:15:18.391180  335387 status.go:463] ha-422549 apiserver status = Running (err=<nil>)
	I1227 20:15:18.391201  335387 status.go:176] ha-422549 status: &{Name:ha-422549 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:15:18.391259  335387 status.go:174] checking status of ha-422549-m02 ...
	I1227 20:15:18.391627  335387 cli_runner.go:164] Run: docker container inspect ha-422549-m02 --format={{.State.Status}}
	I1227 20:15:18.411265  335387 status.go:371] ha-422549-m02 host status = "Running" (err=<nil>)
	I1227 20:15:18.411294  335387 host.go:66] Checking if "ha-422549-m02" exists ...
	I1227 20:15:18.411602  335387 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m02
	I1227 20:15:18.432471  335387 host.go:66] Checking if "ha-422549-m02" exists ...
	I1227 20:15:18.432797  335387 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:15:18.432848  335387 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:18.452217  335387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:15:18.548057  335387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:15:18.562712  335387 kubeconfig.go:125] found "ha-422549" server: "https://192.168.49.254:8443"
	I1227 20:15:18.562743  335387 api_server.go:166] Checking apiserver status ...
	I1227 20:15:18.562795  335387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1227 20:15:18.573534  335387 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:15:18.573557  335387 status.go:463] ha-422549-m02 apiserver status = Running (err=<nil>)
	I1227 20:15:18.573566  335387 status.go:176] ha-422549-m02 status: &{Name:ha-422549-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:15:18.573582  335387 status.go:174] checking status of ha-422549-m03 ...
	I1227 20:15:18.573922  335387 cli_runner.go:164] Run: docker container inspect ha-422549-m03 --format={{.State.Status}}
	I1227 20:15:18.592195  335387 status.go:371] ha-422549-m03 host status = "Stopped" (err=<nil>)
	I1227 20:15:18.592235  335387 status.go:384] host is not running, skipping remaining checks
	I1227 20:15:18.592256  335387 status.go:176] ha-422549-m03 status: &{Name:ha-422549-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:15:18.592277  335387 status.go:174] checking status of ha-422549-m04 ...
	I1227 20:15:18.592584  335387 cli_runner.go:164] Run: docker container inspect ha-422549-m04 --format={{.State.Status}}
	I1227 20:15:18.616286  335387 status.go:371] ha-422549-m04 host status = "Stopped" (err=<nil>)
	I1227 20:15:18.616306  335387 status.go:384] host is not running, skipping remaining checks
	I1227 20:15:18.616313  335387 status.go:176] ha-422549-m04 status: &{Name:ha-422549-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5" : exit status 7
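For reference, the per-node probe that produced the "APIServer: Stopped" result for ha-422549-m02 above can be repeated by hand from a shell inside the node (for example via out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549-m02). The sketch below only mirrors the commands visible in the debug output; the curl call stands in for the internal healthz check, and the HA virtual IP 192.168.49.254 is specific to this cluster, so treat it as an illustration rather than part of the test suite.

	# minimal reproduction of the apiserver status probe (assumes a shell inside the node)
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'     # exits 1 on m02 above: no apiserver process found
	# on a node where the process exists, confirm its freezer cgroup is THAWED (not paused)
	sudo egrep '^[0-9]+:freezer:' /proc/$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')/cgroup
	curl -sk https://192.168.49.254:8443/healthz     # "ok" (HTTP 200) when the control plane is healthy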
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-422549
helpers_test.go:244: (dbg) docker inspect ha-422549:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf",
	        "Created": "2025-12-27T20:03:01.682141141Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 319429,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:07:23.280905445Z",
	            "FinishedAt": "2025-12-27T20:07:22.683216546Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/hostname",
	        "HostsPath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/hosts",
	        "LogPath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf-json.log",
	        "Name": "/ha-422549",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-422549:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-422549",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf",
	                "LowerDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064/merged",
	                "UpperDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064/diff",
	                "WorkDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-422549",
	                "Source": "/var/lib/docker/volumes/ha-422549/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422549",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422549",
	                "name.minikube.sigs.k8s.io": "ha-422549",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "28e77342f2c4751026f399b040de05177304716ac6aab83b39b3d9c47cebffe7",
	            "SandboxKey": "/var/run/docker/netns/28e77342f2c4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33177"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422549": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:36:09:aa:37:bf",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9521cb9225c5842f69a8435c5cf5485b75f9a8b2c68158742ff27c2be32f5951",
	                    "EndpointID": "a460c21f8bbd3e3cd9f593131304327baa8422b2d75f0ce1ac3c5c098867a970",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422549",
	                        "53fd780c3df5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
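The inspect output above is also where the harness reads each node's SSH endpoint: the published host port for 22/tcp (33173 for ha-422549). A single field can be pulled with the same Go-template filter the logs use; the command below is a hand-run equivalent for this run, and the port values shown are specific to it.

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-422549
	# prints 33173 here; the same template with "8443/tcp" yields the forwarded apiserver port (33176)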
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-422549 -n ha-422549
helpers_test.go:253: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p ha-422549 logs -n 25: (2.17374269s)
helpers_test.go:261: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-422549 ssh -n ha-422549-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m02 sudo cat /home/docker/cp-test_ha-422549-m03_ha-422549-m02.txt                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m03:/home/docker/cp-test.txt ha-422549-m04:/home/docker/cp-test_ha-422549-m03_ha-422549-m04.txt               │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test_ha-422549-m03_ha-422549-m04.txt                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp testdata/cp-test.txt ha-422549-m04:/home/docker/cp-test.txt                                                             │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3848759327/001/cp-test_ha-422549-m04.txt │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt ha-422549:/home/docker/cp-test_ha-422549-m04_ha-422549.txt                       │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549 sudo cat /home/docker/cp-test_ha-422549-m04_ha-422549.txt                                                 │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt ha-422549-m02:/home/docker/cp-test_ha-422549-m04_ha-422549-m02.txt               │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m02 sudo cat /home/docker/cp-test_ha-422549-m04_ha-422549-m02.txt                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt ha-422549-m03:/home/docker/cp-test_ha-422549-m04_ha-422549-m03.txt               │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m03 sudo cat /home/docker/cp-test_ha-422549-m04_ha-422549-m03.txt                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ node    │ ha-422549 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ node    │ ha-422549 node start m02 --alsologtostderr -v 5                                                                                      │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ node    │ ha-422549 node list --alsologtostderr -v 5                                                                                           │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │                     │
	│ stop    │ ha-422549 stop --alsologtostderr -v 5                                                                                                │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:07 UTC │
	│ start   │ ha-422549 start --wait true --alsologtostderr -v 5                                                                                   │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:07 UTC │                     │
	│ node    │ ha-422549 node list --alsologtostderr -v 5                                                                                           │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:15 UTC │                     │
	│ node    │ ha-422549 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:15 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:07:23
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:07:23.018829  319301 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:07:23.019045  319301 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:07:23.019069  319301 out.go:374] Setting ErrFile to fd 2...
	I1227 20:07:23.019104  319301 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:07:23.019417  319301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:07:23.019931  319301 out.go:368] Setting JSON to false
	I1227 20:07:23.020994  319301 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6595,"bootTime":1766859448,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:07:23.021172  319301 start.go:143] virtualization:  
	I1227 20:07:23.026478  319301 out.go:179] * [ha-422549] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:07:23.029624  319301 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:07:23.029657  319301 notify.go:221] Checking for updates...
	I1227 20:07:23.035732  319301 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:07:23.038626  319301 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:07:23.041521  319301 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:07:23.044303  319301 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:07:23.047245  319301 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:07:23.050815  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:07:23.050954  319301 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:07:23.074861  319301 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:07:23.074978  319301 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:07:23.134894  319301 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 20:07:23.1261821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:07:23.135004  319301 docker.go:319] overlay module found
	I1227 20:07:23.138113  319301 out.go:179] * Using the docker driver based on existing profile
	I1227 20:07:23.140925  319301 start.go:309] selected driver: docker
	I1227 20:07:23.140943  319301 start.go:928] validating driver "docker" against &{Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:07:23.141082  319301 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:07:23.141181  319301 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:07:23.197269  319301 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 20:07:23.188068839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:07:23.197711  319301 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:07:23.197745  319301 cni.go:84] Creating CNI manager for ""
	I1227 20:07:23.197797  319301 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1227 20:07:23.197857  319301 start.go:353] cluster config:
	{Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:07:23.202906  319301 out.go:179] * Starting "ha-422549" primary control-plane node in "ha-422549" cluster
	I1227 20:07:23.205659  319301 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:07:23.208577  319301 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:07:23.211352  319301 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:07:23.211401  319301 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:07:23.211416  319301 cache.go:65] Caching tarball of preloaded images
	I1227 20:07:23.211429  319301 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:07:23.211499  319301 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:07:23.211509  319301 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:07:23.211655  319301 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:07:23.229712  319301 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:07:23.229734  319301 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:07:23.229749  319301 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:07:23.229779  319301 start.go:360] acquireMachinesLock for ha-422549: {Name:mk939e8ee4c2bedc86cc6a99d76298e7b2a26ce2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:07:23.229835  319301 start.go:364] duration metric: took 35.657µs to acquireMachinesLock for "ha-422549"
	I1227 20:07:23.229869  319301 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:07:23.229878  319301 fix.go:54] fixHost starting: 
	I1227 20:07:23.230138  319301 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:07:23.246992  319301 fix.go:112] recreateIfNeeded on ha-422549: state=Stopped err=<nil>
	W1227 20:07:23.247025  319301 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:07:23.250226  319301 out.go:252] * Restarting existing docker container for "ha-422549" ...
	I1227 20:07:23.250324  319301 cli_runner.go:164] Run: docker start ha-422549
	I1227 20:07:23.503347  319301 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:07:23.526447  319301 kic.go:430] container "ha-422549" state is running.
	I1227 20:07:23.526916  319301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:07:23.555271  319301 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:07:23.555509  319301 machine.go:94] provisionDockerMachine start ...
	I1227 20:07:23.555569  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:23.577158  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:23.577524  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1227 20:07:23.577542  319301 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:07:23.578121  319301 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44738->127.0.0.1:33173: read: connection reset by peer
	I1227 20:07:26.720977  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549
	
	I1227 20:07:26.721006  319301 ubuntu.go:182] provisioning hostname "ha-422549"
	I1227 20:07:26.721067  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:26.738818  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:26.739131  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1227 20:07:26.739148  319301 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-422549 && echo "ha-422549" | sudo tee /etc/hostname
	I1227 20:07:26.886109  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549
	
	I1227 20:07:26.886195  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:26.903863  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:26.904173  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1227 20:07:26.904194  319301 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422549' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422549/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422549' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:07:27.041724  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:07:27.041750  319301 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:07:27.041786  319301 ubuntu.go:190] setting up certificates
	I1227 20:07:27.041803  319301 provision.go:84] configureAuth start
	I1227 20:07:27.041869  319301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:07:27.060364  319301 provision.go:143] copyHostCerts
	I1227 20:07:27.060422  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:07:27.060455  319301 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:07:27.060473  319301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:07:27.060550  319301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:07:27.060645  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:07:27.060668  319301 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:07:27.060679  319301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:07:27.060709  319301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:07:27.060761  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:07:27.060783  319301 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:07:27.060791  319301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:07:27.060818  319301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:07:27.060870  319301 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.ha-422549 san=[127.0.0.1 192.168.49.2 ha-422549 localhost minikube]
	I1227 20:07:27.239677  319301 provision.go:177] copyRemoteCerts
	I1227 20:07:27.239745  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:07:27.239800  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:27.259369  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:07:27.364829  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:07:27.364890  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:07:27.382288  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:07:27.382362  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1227 20:07:27.399154  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:07:27.399213  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:07:27.417099  319301 provision.go:87] duration metric: took 375.277706ms to configureAuth
	I1227 20:07:27.417133  319301 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:07:27.417387  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:07:27.417527  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:27.434441  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:27.434764  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1227 20:07:27.434789  319301 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:07:27.806912  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:07:27.806938  319301 machine.go:97] duration metric: took 4.251419469s to provisionDockerMachine
	I1227 20:07:27.806950  319301 start.go:293] postStartSetup for "ha-422549" (driver="docker")
	I1227 20:07:27.806961  319301 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:07:27.807018  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:07:27.807063  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:27.827185  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:07:27.924757  319301 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:07:27.927910  319301 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:07:27.927939  319301 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:07:27.927951  319301 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:07:27.928034  319301 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:07:27.928163  319301 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:07:27.928176  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:07:27.928319  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:07:27.935125  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:07:27.951297  319301 start.go:296] duration metric: took 144.328969ms for postStartSetup
	I1227 20:07:27.951425  319301 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:07:27.951489  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:27.968679  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:07:28.062963  319301 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:07:28.068245  319301 fix.go:56] duration metric: took 4.838360246s for fixHost
	I1227 20:07:28.068273  319301 start.go:83] releasing machines lock for "ha-422549", held for 4.838415218s
	I1227 20:07:28.068391  319301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:07:28.086189  319301 ssh_runner.go:195] Run: cat /version.json
	I1227 20:07:28.086242  319301 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:07:28.086251  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:28.086297  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:28.112515  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:07:28.119040  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:07:28.213229  319301 ssh_runner.go:195] Run: systemctl --version
	I1227 20:07:28.307265  319301 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:07:28.344982  319301 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:07:28.349307  319301 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:07:28.349416  319301 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:07:28.357039  319301 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:07:28.357061  319301 start.go:496] detecting cgroup driver to use...
	I1227 20:07:28.357091  319301 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:07:28.357187  319301 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:07:28.372341  319301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:07:28.385115  319301 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:07:28.385188  319301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:07:28.400803  319301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:07:28.413692  319301 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:07:28.520682  319301 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:07:28.638372  319301 docker.go:234] disabling docker service ...
	I1227 20:07:28.638476  319301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:07:28.652726  319301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:07:28.665221  319301 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:07:28.769753  319301 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:07:28.887106  319301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:07:28.901250  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:07:28.915594  319301 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:07:28.915656  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.923915  319301 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:07:28.924023  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.932251  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.940443  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.948974  319301 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:07:28.956576  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.964831  319301 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.973077  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.981210  319301 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:07:28.988289  319301 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:07:28.995419  319301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:07:29.102806  319301 ssh_runner.go:195] Run: sudo systemctl restart crio
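A hedged aside: the sed edits above all target /etc/crio/crio.conf.d/02-crio.conf. An illustrative spot-check of the resulting drop-in, using only values taken from the commands in this log:

    # Spot-check the CRI-O drop-in after the edits above; expected values per this log:
    #   pause_image    = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup  = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf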
	I1227 20:07:29.272446  319301 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:07:29.272527  319301 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:07:29.276338  319301 start.go:574] Will wait 60s for crictl version
	I1227 20:07:29.276409  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:07:29.279905  319301 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:07:29.303871  319301 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:07:29.303984  319301 ssh_runner.go:195] Run: crio --version
	I1227 20:07:29.330697  319301 ssh_runner.go:195] Run: crio --version
	I1227 20:07:29.362339  319301 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:07:29.365125  319301 cli_runner.go:164] Run: docker network inspect ha-422549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:07:29.381233  319301 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 20:07:29.385291  319301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:07:29.396534  319301 kubeadm.go:884] updating cluster {Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:07:29.396713  319301 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:07:29.396766  319301 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:07:29.430374  319301 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:07:29.430399  319301 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:07:29.430457  319301 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:07:29.459783  319301 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:07:29.459805  319301 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:07:29.459813  319301 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I1227 20:07:29.459907  319301 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422549 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:07:29.459984  319301 ssh_runner.go:195] Run: crio config
	I1227 20:07:29.529648  319301 cni.go:84] Creating CNI manager for ""
	I1227 20:07:29.529684  319301 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1227 20:07:29.529702  319301 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:07:29.529745  319301 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422549 NodeName:ha-422549 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:07:29.529880  319301 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422549"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
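A hedged aside: the kubeadm config printed above is copied to /var/tmp/minikube/kubeadm.yaml.new a few steps further down in this log. A sketch of how such a file could be sanity-checked by hand, assuming the kubeadm binary sits next to the kubelet under /var/lib/minikube/binaries/v1.35.0 and supports the `config validate` subcommand:

    # Illustrative only: validate the generated kubeadm config on the node.
    # Both paths are assumptions based on the binaries directory and scp destination
    # seen elsewhere in this log.
    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new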
	
	I1227 20:07:29.529906  319301 kube-vip.go:115] generating kube-vip config ...
	I1227 20:07:29.529981  319301 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 20:07:29.541823  319301 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:07:29.541926  319301 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
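A hedged aside: given the manifest above, the usual sign that kube-vip is working is the HA VIP appearing on the configured interface of whichever control-plane node holds the lease. An illustrative check, using only values from the manifest (eth0, 192.168.49.254, port 8443):

    # On the leading control-plane node: the VIP should show up as an extra address on eth0.
    ip addr show eth0 | grep 192.168.49.254
    # The API server should then answer on the VIP (any HTTP response proves reachability).
    curl -k https://192.168.49.254:8443/version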
	I1227 20:07:29.541995  319301 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:07:29.549349  319301 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:07:29.549419  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1227 20:07:29.556490  319301 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1227 20:07:29.568355  319301 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:07:29.580790  319301 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1227 20:07:29.593175  319301 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 20:07:29.606173  319301 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 20:07:29.609837  319301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:07:29.619217  319301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:07:29.735123  319301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:07:29.750389  319301 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549 for IP: 192.168.49.2
	I1227 20:07:29.750412  319301 certs.go:195] generating shared ca certs ...
	I1227 20:07:29.750427  319301 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:07:29.750619  319301 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:07:29.750682  319301 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:07:29.750699  319301 certs.go:257] generating profile certs ...
	I1227 20:07:29.750812  319301 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key
	I1227 20:07:29.751056  319301 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.743f7ef3
	I1227 20:07:29.751077  319301 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt.743f7ef3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1227 20:07:30.216987  319301 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt.743f7ef3 ...
	I1227 20:07:30.217024  319301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt.743f7ef3: {Name:mk5110c0017b8f4cda34fa079f107b622b8f9c47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:07:30.217226  319301 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.743f7ef3 ...
	I1227 20:07:30.217243  319301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.743f7ef3: {Name:mkb171a8982d80a151baacbc9fe03fa941196fd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:07:30.217342  319301 certs.go:382] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt.743f7ef3 -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt
	I1227 20:07:30.217509  319301 certs.go:386] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.743f7ef3 -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key
	I1227 20:07:30.217676  319301 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key
	I1227 20:07:30.217696  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:07:30.217721  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:07:30.217741  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:07:30.217759  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:07:30.217776  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:07:30.217799  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:07:30.217821  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:07:30.217837  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:07:30.217893  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:07:30.217940  319301 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:07:30.217953  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:07:30.217981  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:07:30.218009  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:07:30.218040  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:07:30.218095  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:07:30.218156  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:07:30.218174  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:07:30.218188  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:07:30.218745  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:07:30.239060  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:07:30.258056  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:07:30.279983  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:07:30.299163  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:07:30.317066  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:07:30.333792  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:07:30.363380  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:07:30.383880  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:07:30.402563  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:07:30.424158  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:07:30.441364  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:07:30.455028  319301 ssh_runner.go:195] Run: openssl version
	I1227 20:07:30.462193  319301 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:07:30.476783  319301 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:07:30.488736  319301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:07:30.492787  319301 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:07:30.492869  319301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:07:30.601338  319301 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:07:30.618710  319301 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:07:30.629367  319301 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:07:30.641908  319301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:07:30.646861  319301 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:07:30.646946  319301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:07:30.713797  319301 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:07:30.723031  319301 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:07:30.735659  319301 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:07:30.746061  319301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:07:30.750487  319301 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:07:30.750578  319301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:07:30.818577  319301 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
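A hedged aside: the <hash>.0 names tested above (3ec20f2e.0, b5213941.0, 51391683.0) are the OpenSSL subject hashes of the corresponding certificates installed under /usr/share/ca-certificates; the pattern can be reproduced directly:

    # The symlink name checked above is "<openssl subject hash>.0" of the certificate.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"   # per the sequence above, this is the b5213941.0 entry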
	I1227 20:07:30.827800  319301 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:07:30.835007  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:07:30.906833  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:07:30.969599  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:07:31.044468  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:07:31.106453  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:07:31.155733  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 20:07:31.197366  319301 kubeadm.go:401] StartCluster: {Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:07:31.197537  319301 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:07:31.197613  319301 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:07:31.226634  319301 cri.go:96] found id: "c3f87ac29708d39b5580f953e8ccc765b36b830cf405bc7750b8afe798a15a77"
	I1227 20:07:31.226665  319301 cri.go:96] found id: "79f65bc2e1dbcf7ebe07acaf2143b45f059da3390e107fc3eb87595ccc5f920d"
	I1227 20:07:31.226671  319301 cri.go:96] found id: "dd811e752da4c2025246e605ecc1690aba8141353e20fb91cdad4468a1c059f9"
	I1227 20:07:31.226675  319301 cri.go:96] found id: "feeed30c26dbbb06391e6c43a6d6041af28ce218eaf23eec819dc38cda9444e8"
	I1227 20:07:31.226679  319301 cri.go:96] found id: "bbf24a80fc638071d98a1cc08ab823b436cc206cb456eac7a8be7958d11889db"
	I1227 20:07:31.226683  319301 cri.go:96] found id: ""
	I1227 20:07:31.226745  319301 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:07:31.244824  319301 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:07:31Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:07:31.244903  319301 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:07:31.257811  319301 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:07:31.257842  319301 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:07:31.257908  319301 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:07:31.270645  319301 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:07:31.271073  319301 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-422549" does not appear in /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:07:31.271185  319301 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-272475/kubeconfig needs updating (will repair): [kubeconfig missing "ha-422549" cluster setting kubeconfig missing "ha-422549" context setting]
	I1227 20:07:31.271518  319301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:07:31.272112  319301 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 20:07:31.272794  319301 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1227 20:07:31.272816  319301 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1227 20:07:31.272823  319301 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1227 20:07:31.272851  319301 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1227 20:07:31.272828  319301 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1227 20:07:31.272895  319301 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1227 20:07:31.272900  319301 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1227 20:07:31.273215  319301 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:07:31.284048  319301 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1227 20:07:31.284081  319301 kubeadm.go:602] duration metric: took 26.232251ms to restartPrimaryControlPlane
	I1227 20:07:31.284090  319301 kubeadm.go:403] duration metric: took 86.73489ms to StartCluster
	I1227 20:07:31.284107  319301 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:07:31.284175  319301 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:07:31.284780  319301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:07:31.284997  319301 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:07:31.285023  319301 start.go:242] waiting for startup goroutines ...
	I1227 20:07:31.285032  319301 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:07:31.285574  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:07:31.290925  319301 out.go:179] * Enabled addons: 
	I1227 20:07:31.294082  319301 addons.go:530] duration metric: took 9.037764ms for enable addons: enabled=[]
	I1227 20:07:31.294137  319301 start.go:247] waiting for cluster config update ...
	I1227 20:07:31.294152  319301 start.go:256] writing updated cluster config ...
	I1227 20:07:31.297568  319301 out.go:203] 
	I1227 20:07:31.300820  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:07:31.300937  319301 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:07:31.304320  319301 out.go:179] * Starting "ha-422549-m02" control-plane node in "ha-422549" cluster
	I1227 20:07:31.306983  319301 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:07:31.309971  319301 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:07:31.312773  319301 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:07:31.312796  319301 cache.go:65] Caching tarball of preloaded images
	I1227 20:07:31.312889  319301 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:07:31.312906  319301 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:07:31.313029  319301 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:07:31.313257  319301 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:07:31.349637  319301 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:07:31.349662  319301 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:07:31.349676  319301 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:07:31.349708  319301 start.go:360] acquireMachinesLock for ha-422549-m02: {Name:mk8fc7aa5d6c41749cc4b9db094e3fd243d8b868 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:07:31.349765  319301 start.go:364] duration metric: took 37.299µs to acquireMachinesLock for "ha-422549-m02"
	I1227 20:07:31.349791  319301 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:07:31.349796  319301 fix.go:54] fixHost starting: m02
	I1227 20:07:31.350055  319301 cli_runner.go:164] Run: docker container inspect ha-422549-m02 --format={{.State.Status}}
	I1227 20:07:31.391676  319301 fix.go:112] recreateIfNeeded on ha-422549-m02: state=Stopped err=<nil>
	W1227 20:07:31.391706  319301 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:07:31.394953  319301 out.go:252] * Restarting existing docker container for "ha-422549-m02" ...
	I1227 20:07:31.395043  319301 cli_runner.go:164] Run: docker start ha-422549-m02
	I1227 20:07:31.777922  319301 cli_runner.go:164] Run: docker container inspect ha-422549-m02 --format={{.State.Status}}
	I1227 20:07:31.805184  319301 kic.go:430] container "ha-422549-m02" state is running.
	I1227 20:07:31.805591  319301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m02
	I1227 20:07:31.841697  319301 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:07:31.841951  319301 machine.go:94] provisionDockerMachine start ...
	I1227 20:07:31.842022  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:31.865663  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:31.865982  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1227 20:07:31.865998  319301 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:07:31.866584  319301 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58412->127.0.0.1:33178: read: connection reset by peer
	I1227 20:07:35.045099  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m02
	
	I1227 20:07:35.045161  319301 ubuntu.go:182] provisioning hostname "ha-422549-m02"
	I1227 20:07:35.045260  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:35.074417  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:35.074732  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1227 20:07:35.074750  319301 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-422549-m02 && echo "ha-422549-m02" | sudo tee /etc/hostname
	I1227 20:07:35.272951  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m02
	
	I1227 20:07:35.273095  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:35.310855  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:35.311167  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1227 20:07:35.311187  319301 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422549-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422549-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422549-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:07:35.489398  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:07:35.489483  319301 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:07:35.489515  319301 ubuntu.go:190] setting up certificates
	I1227 20:07:35.489552  319301 provision.go:84] configureAuth start
	I1227 20:07:35.489651  319301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m02
	I1227 20:07:35.519140  319301 provision.go:143] copyHostCerts
	I1227 20:07:35.519180  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:07:35.519212  319301 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:07:35.519219  319301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:07:35.519305  319301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:07:35.519384  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:07:35.519400  319301 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:07:35.519405  319301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:07:35.519428  319301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:07:35.519467  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:07:35.519482  319301 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:07:35.519486  319301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:07:35.519508  319301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:07:35.519552  319301 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.ha-422549-m02 san=[127.0.0.1 192.168.49.3 ha-422549-m02 localhost minikube]
	I1227 20:07:35.673804  319301 provision.go:177] copyRemoteCerts
	I1227 20:07:35.676274  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:07:35.676362  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:35.700203  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:07:35.810686  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:07:35.810802  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 20:07:35.827198  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:07:35.827254  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:07:35.847940  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:07:35.848040  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:07:35.870095  319301 provision.go:87] duration metric: took 380.509887ms to configureAuth
	I1227 20:07:35.870124  319301 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:07:35.870422  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:07:35.870563  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:35.893611  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:35.893918  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1227 20:07:35.893932  319301 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:07:36.282435  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:07:36.282459  319301 machine.go:97] duration metric: took 4.440490595s to provisionDockerMachine
	I1227 20:07:36.282470  319301 start.go:293] postStartSetup for "ha-422549-m02" (driver="docker")
	I1227 20:07:36.282505  319301 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:07:36.282595  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:07:36.282666  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:36.301003  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:07:36.402628  319301 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:07:36.406068  319301 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:07:36.406097  319301 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:07:36.406108  319301 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:07:36.406247  319301 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:07:36.406355  319301 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:07:36.406371  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:07:36.406502  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:07:36.414126  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:07:36.431291  319301 start.go:296] duration metric: took 148.805898ms for postStartSetup
	I1227 20:07:36.431373  319301 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:07:36.431417  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:36.449358  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:07:36.546713  319301 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:07:36.551629  319301 fix.go:56] duration metric: took 5.201823785s for fixHost
	I1227 20:07:36.551655  319301 start.go:83] releasing machines lock for "ha-422549-m02", held for 5.20187627s
	I1227 20:07:36.551729  319301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m02
	I1227 20:07:36.571695  319301 out.go:179] * Found network options:
	I1227 20:07:36.574736  319301 out.go:179]   - NO_PROXY=192.168.49.2
	W1227 20:07:36.577654  319301 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:07:36.577694  319301 proxy.go:120] fail to check proxy env: Error ip not in block
	I1227 20:07:36.577781  319301 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:07:36.577827  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:36.578074  319301 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:07:36.578134  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:36.598248  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:07:36.598898  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:07:36.873888  319301 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:07:36.879823  319301 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:07:36.879937  319301 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:07:36.899888  319301 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:07:36.899953  319301 start.go:496] detecting cgroup driver to use...
	I1227 20:07:36.899997  319301 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:07:36.900076  319301 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:07:36.928970  319301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:07:36.947727  319301 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:07:36.947845  319301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:07:36.967863  319301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:07:36.998332  319301 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:07:37.167619  319301 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:07:37.326628  319301 docker.go:234] disabling docker service ...
	I1227 20:07:37.326748  319301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:07:37.341981  319301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:07:37.354777  319301 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:07:37.613409  319301 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:07:37.870750  319301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:07:37.886152  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:07:37.906254  319301 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:07:37.906377  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.926031  319301 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:07:37.926143  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.937485  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.946425  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.958890  319301 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:07:37.968858  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.978269  319301 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.986277  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.995011  319301 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:07:38.002468  319301 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:07:38.010027  319301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:07:38.207437  319301 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:09:08.647737  319301 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.440260784s)
	I1227 20:09:08.647767  319301 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:09:08.647821  319301 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:09:08.651981  319301 start.go:574] Will wait 60s for crictl version
	I1227 20:09:08.652048  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:09:08.655690  319301 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:09:08.681479  319301 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:09:08.681565  319301 ssh_runner.go:195] Run: crio --version
	I1227 20:09:08.713332  319301 ssh_runner.go:195] Run: crio --version
	I1227 20:09:08.746336  319301 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:09:08.749205  319301 out.go:179]   - env NO_PROXY=192.168.49.2
	I1227 20:09:08.752182  319301 cli_runner.go:164] Run: docker network inspect ha-422549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:09:08.768090  319301 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 20:09:08.771937  319301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
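The bash one-liner above is an idempotent /etc/hosts update: strip any stale host.minikube.internal entry, append the current mapping, and copy the result back. A small Go sketch of the same filter-and-append, assuming it runs with enough privilege to write /etc/hosts:

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.49.1\thost.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		kept := lines[:0]
		for _, line := range lines {
			// same filter as the grep -v in the command above
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}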
	I1227 20:09:08.781622  319301 mustload.go:66] Loading cluster: ha-422549
	I1227 20:09:08.781869  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:09:08.782144  319301 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:09:08.798634  319301 host.go:66] Checking if "ha-422549" exists ...
	I1227 20:09:08.798913  319301 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549 for IP: 192.168.49.3
	I1227 20:09:08.798926  319301 certs.go:195] generating shared ca certs ...
	I1227 20:09:08.798941  319301 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:09:08.799067  319301 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:09:08.799116  319301 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:09:08.799129  319301 certs.go:257] generating profile certs ...
	I1227 20:09:08.799210  319301 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key
	I1227 20:09:08.799280  319301 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.982843aa
	I1227 20:09:08.799324  319301 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key
	I1227 20:09:08.799337  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:09:08.799350  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:09:08.799367  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:09:08.799386  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:09:08.799406  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:09:08.799422  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:09:08.799438  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:09:08.799453  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:09:08.799510  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:09:08.799546  319301 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:09:08.799559  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:09:08.799588  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:09:08.799617  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:09:08.799646  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:09:08.799694  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:09:08.799727  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:09:08.799744  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:09:08.799758  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:09:08.799822  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:09:08.817939  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:09:08.909783  319301 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1227 20:09:08.913788  319301 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1227 20:09:08.922116  319301 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1227 20:09:08.925553  319301 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1227 20:09:08.933735  319301 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1227 20:09:08.937584  319301 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1227 20:09:08.946742  319301 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1227 20:09:08.951033  319301 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1227 20:09:08.959969  319301 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1227 20:09:08.963648  319301 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1227 20:09:08.971803  319301 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1227 20:09:08.975349  319301 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1227 20:09:08.983445  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:09:09.001559  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:09:09.020775  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:09:09.041958  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:09:09.059931  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:09:09.076796  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:09:09.095447  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:09:09.113037  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:09:09.130903  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:09:09.148555  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:09:09.167075  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:09:09.184251  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1227 20:09:09.197053  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1227 20:09:09.209869  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1227 20:09:09.223329  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1227 20:09:09.236109  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1227 20:09:09.249524  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1227 20:09:09.262558  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (728 bytes)
	I1227 20:09:09.278766  319301 ssh_runner.go:195] Run: openssl version
	I1227 20:09:09.288173  319301 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:09:09.303263  319301 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:09:09.312839  319301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:09:09.317343  319301 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:09:09.317435  319301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:09:09.358946  319301 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:09:09.366603  319301 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:09:09.374144  319301 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:09:09.381566  319301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:09:09.385396  319301 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:09:09.385483  319301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:09:09.427186  319301 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:09:09.435033  319301 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:09:09.442740  319301 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:09:09.450736  319301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:09:09.455313  319301 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:09:09.455406  319301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:09:09.506456  319301 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
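Each extra CA goes through the same trust-store dance: symlink the PEM into /etc/ssl/certs, compute its OpenSSL subject hash, and expect a <hash>.0 symlink (b5213941.0 for minikubeCA here). A Go sketch of that wiring which also creates the hash link when it is missing, whereas the steps above only test for it:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		src := "/usr/share/ca-certificates/minikubeCA.pem"
		link := "/etc/ssl/certs/minikubeCA.pem"
		_ = os.Remove(link) // ln -fs semantics: replace any existing link
		if err := os.Symlink(src, link); err != nil {
			panic(err)
		}
		// subject hash via openssl, as in the log above
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", src).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		hashLink := filepath.Join("/etc/ssl/certs", hash+".0")
		if _, err := os.Lstat(hashLink); os.IsNotExist(err) {
			if err := os.Symlink(link, hashLink); err != nil {
				panic(err)
			}
		}
		fmt.Println("trust store entry:", hashLink)
	}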
	I1227 20:09:09.515191  319301 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:09:09.519143  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:09:09.560830  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:09:09.601733  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:09:09.642802  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:09:09.683557  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:09:09.724343  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
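The six openssl runs above are 24-hour expiry checks (-checkend 86400) on the control-plane client and etcd certificates. An equivalent check in Go with crypto/x509, listing the same files:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/apiserver-etcd-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
			"/var/lib/minikube/certs/etcd/peer.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		deadline := time.Now().Add(24 * time.Hour) // -checkend 86400
		for _, path := range certs {
			data, err := os.ReadFile(path)
			if err != nil {
				fmt.Println(path, "read error:", err)
				continue
			}
			block, _ := pem.Decode(data)
			if block == nil {
				fmt.Println(path, "not PEM")
				continue
			}
			cert, err := x509.ParseCertificate(block.Bytes)
			if err != nil {
				fmt.Println(path, "parse error:", err)
				continue
			}
			if cert.NotAfter.Before(deadline) {
				fmt.Printf("%s expires within 24h (NotAfter %s)\n", path, cert.NotAfter)
			}
		}
	}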
	I1227 20:09:09.764937  319301 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.35.0 crio true true} ...
	I1227 20:09:09.765044  319301 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422549-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:09:09.765076  319301 kube-vip.go:115] generating kube-vip config ...
	I1227 20:09:09.765126  319301 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 20:09:09.777907  319301 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:09:09.778008  319301 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
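The generated kube-vip static pod pins the HA virtual IP 192.168.49.254 on eth0 via ARP with leader election; control-plane load-balancing stays off because the lsmod probe above found no ip_vs modules. A small Go sketch of that probe using /proc/modules instead of lsmod:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("/proc/modules")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		found := false
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			// equivalent of `lsmod | grep ip_vs`
			if strings.HasPrefix(sc.Text(), "ip_vs") {
				found = true
				break
			}
		}
		fmt.Println("ip_vs loaded:", found)
	}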
	I1227 20:09:09.778101  319301 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:09:09.785542  319301 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:09:09.785669  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1227 20:09:09.793814  319301 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 20:09:09.808509  319301 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:09:09.822210  319301 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 20:09:09.836025  319301 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 20:09:09.840416  319301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:09:09.851735  319301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:09:09.987416  319301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:09:10.000958  319301 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:09:10.001514  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:09:10.006801  319301 out.go:179] * Verifying Kubernetes components...
	I1227 20:09:10.009655  319301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:09:10.156826  319301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:09:10.171179  319301 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1227 20:09:10.171261  319301 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1227 20:09:10.171542  319301 node_ready.go:35] waiting up to 6m0s for node "ha-422549-m02" to be "Ready" ...
	I1227 20:09:13.107692  319301 node_ready.go:49] node "ha-422549-m02" is "Ready"
	I1227 20:09:13.107720  319301 node_ready.go:38] duration metric: took 2.936159281s for node "ha-422549-m02" to be "Ready" ...
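The node itself reports Ready in under three seconds; the stall comes afterwards while waiting for a local kube-apiserver process. For comparison, an equivalent readiness wait from the command line, shelled out from Go and assuming kubectl plus a kubeconfig that reaches the cluster (minikube uses client-go directly here):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// same node name and 6m budget as the wait above
		cmd := exec.Command("kubectl", "wait", "--for=condition=Ready",
			"node/ha-422549-m02", "--timeout=6m0s")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}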
	I1227 20:09:13.107734  319301 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:09:13.107789  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:13.607926  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:14.107987  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:14.607959  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:15.108981  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:15.607952  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:16.108673  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:16.608170  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:17.108757  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:17.608081  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:18.108738  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:18.608607  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:19.108699  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:19.608389  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:20.107908  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:20.608001  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:21.108548  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:21.608334  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:22.108180  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:22.607875  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:23.108675  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:23.608625  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:24.108180  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:24.608668  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:25.108754  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:25.607950  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:26.107930  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:26.607944  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:27.108744  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:27.608613  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:28.108398  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:28.608347  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:29.108513  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:29.607943  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:30.108298  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:30.607986  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:31.108862  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:31.608852  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:32.108838  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:32.608448  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:33.108526  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:33.608595  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:34.108250  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:34.607930  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:35.107952  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:35.608214  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:36.108509  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:36.608114  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:37.108454  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:37.607937  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:38.108594  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:38.607928  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:39.107995  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:39.608876  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:40.107937  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:40.607935  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:41.108437  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:41.607967  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:42.110329  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:42.608527  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:43.108197  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:43.608003  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:44.108494  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:44.608788  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:45.108779  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:45.608786  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:46.108080  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:46.608527  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:47.108485  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:47.608412  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:48.108174  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:48.608559  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:49.108719  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:49.608778  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:50.108396  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:50.608188  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:51.108854  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:51.607920  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:52.108260  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:52.607897  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:53.108165  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:53.608820  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:54.107921  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:54.608807  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:55.107966  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:55.608683  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:56.108704  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:56.608641  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:57.107949  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:57.608891  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:58.107911  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:58.607913  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:59.108124  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:59.608080  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:00.126668  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:00.607936  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:01.107972  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:01.607964  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:02.108918  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:02.608274  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:03.108889  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:03.607948  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:04.108838  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:04.608617  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:05.108707  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:05.608552  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:06.108350  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:06.607927  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:07.108601  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:07.607942  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:08.108292  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:08.607954  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:09.108836  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:09.608829  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
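The block above polls roughly every 500ms for about a minute without ever finding a kube-apiserver process on m02, after which minikube switches to collecting diagnostics. A Go sketch of such a poll-until-deadline loop; the interval and deadline are inferred from the timestamps, not taken from minikube's source:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// same pattern as the repeated pgrep above
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				fmt.Printf("kube-apiserver pid: %s", out)
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("kube-apiserver did not appear before the deadline")
	}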
	I1227 20:10:10.108562  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:10.108721  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:10.138615  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:10.138637  319301 cri.go:96] found id: ""
	I1227 20:10:10.138646  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:10.138711  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:10.143115  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:10.143189  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:10.173558  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:10.173579  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:10.173584  319301 cri.go:96] found id: ""
	I1227 20:10:10.173592  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:10.173653  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:10.178008  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:10.182191  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:10.182272  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:10.220643  319301 cri.go:96] found id: ""
	I1227 20:10:10.220668  319301 logs.go:282] 0 containers: []
	W1227 20:10:10.220677  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:10.220684  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:10.220746  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:10.250139  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:10.250162  319301 cri.go:96] found id: ""
	I1227 20:10:10.250170  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:10.250228  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:10.253966  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:10.254039  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:10.290311  319301 cri.go:96] found id: ""
	I1227 20:10:10.290334  319301 logs.go:282] 0 containers: []
	W1227 20:10:10.290343  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:10.290349  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:10.290422  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:10.319925  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:10.319948  319301 cri.go:96] found id: ""
	I1227 20:10:10.319974  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:10.320031  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:10.323821  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:10.323902  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:10.352069  319301 cri.go:96] found id: ""
	I1227 20:10:10.352091  319301 logs.go:282] 0 containers: []
	W1227 20:10:10.352100  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:10.352115  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:10.352127  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:10.451345  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:10.451385  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:10.469929  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:10.469961  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:10.875866  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:10.868032    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.868914    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.869711    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.870583    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.872332    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:10.868032    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.868914    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.869711    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.870583    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.872332    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
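describe nodes fails because nothing is accepting connections on localhost:8443 on this node, consistent with the missing kube-apiserver process. A one-off Go check of that symptom:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable on localhost:8443:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}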
	I1227 20:10:10.875894  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:10.875909  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:10.936407  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:10.936442  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:10.983671  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:10.983707  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:11.017260  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:11.017294  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:11.052563  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:11.052594  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:11.130184  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:11.130222  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:11.162524  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:11.162557  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
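Each diagnostics round uses the same pattern: list container IDs by name with crictl, then tail the last 400 log lines of each. A compact Go sketch of that loop over the components checked above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
			out, err := exec.Command("sudo", "crictl", "--timeout=10s", "ps", "-a",
				"--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Println(name, "list failed:", err)
				continue
			}
			for _, id := range strings.Fields(string(out)) {
				logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s %s ===\n%s\n", name, id, logs)
			}
		}
	}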
	I1227 20:10:13.706075  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:13.716624  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:13.716698  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:13.747368  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:13.747388  319301 cri.go:96] found id: ""
	I1227 20:10:13.747396  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:13.747456  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:13.751096  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:13.751188  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:13.777717  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:13.777790  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:13.777802  319301 cri.go:96] found id: ""
	I1227 20:10:13.777811  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:13.777878  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:13.781548  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:13.785083  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:13.785193  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:13.811036  319301 cri.go:96] found id: ""
	I1227 20:10:13.811063  319301 logs.go:282] 0 containers: []
	W1227 20:10:13.811072  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:13.811079  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:13.811137  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:13.837822  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:13.837845  319301 cri.go:96] found id: ""
	I1227 20:10:13.837854  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:13.837911  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:13.841739  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:13.841856  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:13.868264  319301 cri.go:96] found id: ""
	I1227 20:10:13.868341  319301 logs.go:282] 0 containers: []
	W1227 20:10:13.868364  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:13.868387  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:13.868471  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:13.894511  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:13.894535  319301 cri.go:96] found id: ""
	I1227 20:10:13.894543  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:13.894621  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:13.898655  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:13.898764  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:13.924022  319301 cri.go:96] found id: ""
	I1227 20:10:13.924047  319301 logs.go:282] 0 containers: []
	W1227 20:10:13.924062  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:13.924077  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:13.924089  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:13.956536  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:13.956567  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:14.057854  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:14.057894  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:14.139219  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:14.129809    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.130897    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.132418    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.132833    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.134384    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:14.129809    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.130897    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.132418    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.132833    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.134384    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:14.139251  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:14.139265  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:14.182716  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:14.182750  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:14.208224  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:14.208301  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:14.225984  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:14.226016  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:14.256249  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:14.256314  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:14.301058  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:14.301201  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:14.329017  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:14.329046  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:16.906959  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:16.917912  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:16.917986  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:16.947235  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:16.947299  319301 cri.go:96] found id: ""
	I1227 20:10:16.947322  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:16.947404  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:16.951076  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:16.951204  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:16.984938  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:16.984962  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:16.984968  319301 cri.go:96] found id: ""
	I1227 20:10:16.984976  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:16.985053  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:16.988800  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:16.992512  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:16.992592  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:17.026764  319301 cri.go:96] found id: ""
	I1227 20:10:17.026789  319301 logs.go:282] 0 containers: []
	W1227 20:10:17.026798  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:17.026804  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:17.026875  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:17.053717  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:17.053741  319301 cri.go:96] found id: ""
	I1227 20:10:17.053749  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:17.053803  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:17.057601  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:17.057691  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:17.088432  319301 cri.go:96] found id: ""
	I1227 20:10:17.088455  319301 logs.go:282] 0 containers: []
	W1227 20:10:17.088464  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:17.088470  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:17.088529  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:17.115961  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:17.115985  319301 cri.go:96] found id: ""
	I1227 20:10:17.115995  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:17.116046  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:17.119890  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:17.119963  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:17.148631  319301 cri.go:96] found id: ""
	I1227 20:10:17.148654  319301 logs.go:282] 0 containers: []
	W1227 20:10:17.148663  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:17.148678  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:17.148694  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:17.240100  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:17.240138  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:17.259693  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:17.259725  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:17.291635  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:17.291666  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:17.368588  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:17.368624  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:17.407623  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:17.407652  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:17.475650  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:17.467352    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.467760    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.469497    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.470032    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.471718    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:17.467352    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.467760    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.469497    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.470032    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.471718    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:17.475719  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:17.475739  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:17.516294  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:17.516328  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:17.559509  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:17.559544  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:17.587296  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:17.587332  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:20.115472  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:20.126778  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:20.126847  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:20.153825  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:20.153850  319301 cri.go:96] found id: ""
	I1227 20:10:20.153859  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:20.153919  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:20.157682  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:20.157759  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:20.189317  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:20.189386  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:20.189420  319301 cri.go:96] found id: ""
	I1227 20:10:20.189493  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:20.189582  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:20.193669  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:20.197374  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:20.197473  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:20.237542  319301 cri.go:96] found id: ""
	I1227 20:10:20.237570  319301 logs.go:282] 0 containers: []
	W1227 20:10:20.237579  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:20.237585  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:20.237643  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:20.274313  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:20.274381  319301 cri.go:96] found id: ""
	I1227 20:10:20.274417  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:20.274509  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:20.279651  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:20.279718  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:20.306525  319301 cri.go:96] found id: ""
	I1227 20:10:20.306586  319301 logs.go:282] 0 containers: []
	W1227 20:10:20.306610  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:20.306636  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:20.306707  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:20.333808  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:20.333829  319301 cri.go:96] found id: ""
	I1227 20:10:20.333837  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:20.333927  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:20.337575  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:20.337677  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:20.372581  319301 cri.go:96] found id: ""
	I1227 20:10:20.372607  319301 logs.go:282] 0 containers: []
	W1227 20:10:20.372621  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:20.372636  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:20.372647  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:20.467758  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:20.467794  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:20.486495  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:20.486527  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:20.553188  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:20.545238    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.545758    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.547330    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.548070    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.549570    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:20.545238    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.545758    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.547330    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.548070    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.549570    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:20.553253  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:20.553282  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:20.580345  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:20.580374  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:20.626310  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:20.626345  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:20.670432  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:20.670467  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:20.696170  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:20.696199  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:20.730948  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:20.730976  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:20.805291  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:20.805325  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:23.351696  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:23.362369  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:23.362478  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:23.391572  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:23.391649  319301 cri.go:96] found id: ""
	I1227 20:10:23.391664  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:23.391739  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:23.395547  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:23.395671  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:23.422118  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:23.422141  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:23.422147  319301 cri.go:96] found id: ""
	I1227 20:10:23.422155  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:23.422235  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:23.426008  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:23.429336  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:23.429411  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:23.459272  319301 cri.go:96] found id: ""
	I1227 20:10:23.459299  319301 logs.go:282] 0 containers: []
	W1227 20:10:23.459308  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:23.459316  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:23.459398  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:23.484648  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:23.484671  319301 cri.go:96] found id: ""
	I1227 20:10:23.484679  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:23.484755  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:23.488422  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:23.488501  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:23.512953  319301 cri.go:96] found id: ""
	I1227 20:10:23.512978  319301 logs.go:282] 0 containers: []
	W1227 20:10:23.512987  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:23.512994  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:23.513049  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:23.538866  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:23.538889  319301 cri.go:96] found id: ""
	I1227 20:10:23.538898  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:23.538952  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:23.542487  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:23.542556  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:23.568959  319301 cri.go:96] found id: ""
	I1227 20:10:23.568985  319301 logs.go:282] 0 containers: []
	W1227 20:10:23.568994  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:23.569010  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:23.569023  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:23.614313  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:23.614346  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:23.639847  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:23.639875  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:23.671907  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:23.671936  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:23.702365  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:23.702394  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:23.783203  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:23.783246  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:23.884915  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:23.884948  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:23.902305  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:23.902337  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:23.970687  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:23.961560    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.962112    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.963576    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.964060    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.965635    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:23.961560    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.962112    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.963576    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.964060    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.965635    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:23.970722  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:23.970735  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:24.004792  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:24.004819  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:26.564703  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:26.575059  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:26.575143  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:26.604294  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:26.604317  319301 cri.go:96] found id: ""
	I1227 20:10:26.604326  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:26.604381  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:26.608875  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:26.608942  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:26.634574  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:26.634595  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:26.634600  319301 cri.go:96] found id: ""
	I1227 20:10:26.634607  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:26.634660  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:26.638317  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:26.641718  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:26.641787  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:26.670771  319301 cri.go:96] found id: ""
	I1227 20:10:26.670793  319301 logs.go:282] 0 containers: []
	W1227 20:10:26.670802  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:26.670808  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:26.670867  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:26.697344  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:26.697376  319301 cri.go:96] found id: ""
	I1227 20:10:26.697386  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:26.697491  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:26.701237  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:26.701344  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:26.726058  319301 cri.go:96] found id: ""
	I1227 20:10:26.726125  319301 logs.go:282] 0 containers: []
	W1227 20:10:26.726140  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:26.726147  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:26.726209  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:26.752574  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:26.752594  319301 cri.go:96] found id: ""
	I1227 20:10:26.752602  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:26.752658  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:26.756386  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:26.756457  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:26.786442  319301 cri.go:96] found id: ""
	I1227 20:10:26.786465  319301 logs.go:282] 0 containers: []
	W1227 20:10:26.786474  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:26.786488  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:26.786500  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:26.814367  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:26.814441  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:26.839989  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:26.840061  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:26.876712  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:26.876796  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:26.918742  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:26.918784  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:26.961668  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:26.961699  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:26.994123  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:26.994151  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:27.085553  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:27.085590  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:27.186397  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:27.186433  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:27.204121  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:27.204153  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:27.273016  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:27.262702    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.263577    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.265227    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.266801    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.267439    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:27.262702    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.263577    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.265227    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.266801    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.267439    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:29.773264  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:29.783744  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:29.783817  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:29.813744  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:29.813806  319301 cri.go:96] found id: ""
	I1227 20:10:29.813829  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:29.813919  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:29.818669  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:29.818786  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:29.844784  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:29.844802  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:29.844806  319301 cri.go:96] found id: ""
	I1227 20:10:29.844814  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:29.844868  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:29.848603  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:29.852078  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:29.852143  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:29.878788  319301 cri.go:96] found id: ""
	I1227 20:10:29.878814  319301 logs.go:282] 0 containers: []
	W1227 20:10:29.878823  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:29.878830  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:29.878890  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:29.908178  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:29.908200  319301 cri.go:96] found id: ""
	I1227 20:10:29.908209  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:29.908264  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:29.911793  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:29.911884  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:29.952724  319301 cri.go:96] found id: ""
	I1227 20:10:29.952749  319301 logs.go:282] 0 containers: []
	W1227 20:10:29.952759  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:29.952765  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:29.952855  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:30.008208  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:30.008289  319301 cri.go:96] found id: ""
	I1227 20:10:30.008312  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:30.008390  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:30.012672  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:30.012766  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:30.063201  319301 cri.go:96] found id: ""
	I1227 20:10:30.063273  319301 logs.go:282] 0 containers: []
	W1227 20:10:30.063297  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:30.063334  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:30.063369  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:30.152059  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:30.152097  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:30.188985  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:30.189011  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:30.288999  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:30.289079  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:30.307734  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:30.307764  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:30.354973  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:30.355008  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:30.425745  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:30.417740    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.418295    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.419807    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.420357    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.421985    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:30.417740    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.418295    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.419807    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.420357    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.421985    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:30.425773  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:30.425789  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:30.454739  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:30.454771  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:30.511002  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:30.511040  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:30.537495  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:30.537526  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:33.065805  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:33.076295  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:33.076418  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:33.103323  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:33.103346  319301 cri.go:96] found id: ""
	I1227 20:10:33.103356  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:33.103410  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:33.107007  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:33.107081  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:33.133167  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:33.133190  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:33.133195  319301 cri.go:96] found id: ""
	I1227 20:10:33.133203  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:33.133264  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:33.137298  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:33.141081  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:33.141152  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:33.167830  319301 cri.go:96] found id: ""
	I1227 20:10:33.167854  319301 logs.go:282] 0 containers: []
	W1227 20:10:33.167862  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:33.167869  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:33.167929  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:33.196531  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:33.196555  319301 cri.go:96] found id: ""
	I1227 20:10:33.196564  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:33.196621  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:33.200165  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:33.200267  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:33.226904  319301 cri.go:96] found id: ""
	I1227 20:10:33.226933  319301 logs.go:282] 0 containers: []
	W1227 20:10:33.226943  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:33.226950  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:33.227009  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:33.254111  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:33.254132  319301 cri.go:96] found id: ""
	I1227 20:10:33.254141  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:33.254197  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:33.258995  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:33.259128  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:33.285296  319301 cri.go:96] found id: ""
	I1227 20:10:33.285320  319301 logs.go:282] 0 containers: []
	W1227 20:10:33.285330  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:33.285350  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:33.285363  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:33.379312  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:33.379349  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:33.397669  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:33.397703  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:33.475423  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:33.464091    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.464710    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.467091    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.469890    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.471637    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:33.464091    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.464710    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.467091    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.469890    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.471637    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:33.475445  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:33.475462  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:33.505362  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:33.505391  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:33.549322  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:33.549353  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:33.592755  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:33.592789  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:33.625076  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:33.625105  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:33.676663  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:33.676692  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:33.703598  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:33.703627  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:36.283392  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:36.293854  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:36.293938  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:36.321425  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:36.321524  319301 cri.go:96] found id: ""
	I1227 20:10:36.321538  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:36.321604  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:36.325322  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:36.325393  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:36.354160  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:36.354182  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:36.354187  319301 cri.go:96] found id: ""
	I1227 20:10:36.354194  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:36.354250  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:36.357942  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:36.361261  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:36.361336  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:36.387328  319301 cri.go:96] found id: ""
	I1227 20:10:36.387356  319301 logs.go:282] 0 containers: []
	W1227 20:10:36.387366  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:36.387373  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:36.387431  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:36.418785  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:36.418807  319301 cri.go:96] found id: ""
	I1227 20:10:36.418815  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:36.418871  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:36.422631  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:36.422709  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:36.452773  319301 cri.go:96] found id: ""
	I1227 20:10:36.452799  319301 logs.go:282] 0 containers: []
	W1227 20:10:36.452807  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:36.452814  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:36.452873  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:36.478409  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:36.478432  319301 cri.go:96] found id: ""
	I1227 20:10:36.478440  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:36.478515  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:36.482226  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:36.482329  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:36.510113  319301 cri.go:96] found id: ""
	I1227 20:10:36.510139  319301 logs.go:282] 0 containers: []
	W1227 20:10:36.510148  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:36.510162  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:36.510206  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:36.528485  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:36.528518  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:36.596104  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:36.586542    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.587371    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.589128    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.589804    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.591834    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:36.586542    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.587371    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.589128    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.589804    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.591834    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:36.596128  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:36.596153  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:36.656568  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:36.656646  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:36.685002  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:36.685040  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:36.719044  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:36.719072  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:36.815628  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:36.815664  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:36.845372  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:36.845407  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:36.892923  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:36.892962  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:36.920168  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:36.920205  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:39.498228  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:39.509127  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:39.509200  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:39.535429  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:39.535450  319301 cri.go:96] found id: ""
	I1227 20:10:39.535458  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:39.535511  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:39.539036  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:39.539115  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:39.565370  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:39.565395  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:39.565401  319301 cri.go:96] found id: ""
	I1227 20:10:39.565411  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:39.565505  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:39.569317  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:39.572838  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:39.572913  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:39.600208  319301 cri.go:96] found id: ""
	I1227 20:10:39.600233  319301 logs.go:282] 0 containers: []
	W1227 20:10:39.600243  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:39.600249  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:39.600359  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:39.627924  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:39.627947  319301 cri.go:96] found id: ""
	I1227 20:10:39.627955  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:39.628038  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:39.631825  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:39.631929  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:39.670875  319301 cri.go:96] found id: ""
	I1227 20:10:39.670898  319301 logs.go:282] 0 containers: []
	W1227 20:10:39.670907  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:39.670949  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:39.671032  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:39.698935  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:39.698963  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:39.698968  319301 cri.go:96] found id: ""
	I1227 20:10:39.698976  319301 logs.go:282] 2 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:39.699057  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:39.702755  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:39.706280  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:39.706367  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:39.732144  319301 cri.go:96] found id: ""
	I1227 20:10:39.732171  319301 logs.go:282] 0 containers: []
	W1227 20:10:39.732192  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:39.732202  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:39.732218  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:39.833062  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:39.833097  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:39.851039  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:39.851169  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:39.936210  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:39.936253  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:40.017614  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:40.018998  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:40.077844  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:40.077881  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:40.191560  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:40.191604  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:40.229430  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:40.229483  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:40.316177  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:40.307077    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.308580    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.309399    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.310789    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.312661    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:40.307077    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.308580    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.309399    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.310789    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.312661    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:40.316202  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:40.316215  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:40.351544  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:40.351584  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:40.379852  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:40.379880  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:42.911718  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:42.922519  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:42.922590  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:42.949680  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:42.949705  319301 cri.go:96] found id: ""
	I1227 20:10:42.949714  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:42.949773  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:42.953773  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:42.953858  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:42.986307  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:42.986333  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:42.986340  319301 cri.go:96] found id: ""
	I1227 20:10:42.986347  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:42.986401  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:42.989939  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:42.993412  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:42.993511  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:43.027198  319301 cri.go:96] found id: ""
	I1227 20:10:43.027224  319301 logs.go:282] 0 containers: []
	W1227 20:10:43.027244  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:43.027251  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:43.027314  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:43.054716  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:43.054739  319301 cri.go:96] found id: ""
	I1227 20:10:43.054748  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:43.054803  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:43.059284  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:43.059357  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:43.093962  319301 cri.go:96] found id: ""
	I1227 20:10:43.093986  319301 logs.go:282] 0 containers: []
	W1227 20:10:43.093995  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:43.094002  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:43.094060  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:43.122219  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:43.122257  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:43.122263  319301 cri.go:96] found id: ""
	I1227 20:10:43.122270  319301 logs.go:282] 2 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:43.122337  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:43.126232  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:43.129862  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:43.129978  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:43.156857  319301 cri.go:96] found id: ""
	I1227 20:10:43.156882  319301 logs.go:282] 0 containers: []
	W1227 20:10:43.156891  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:43.156901  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:43.156914  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:43.174975  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:43.175005  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:43.219964  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:43.220004  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:43.245562  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:43.245591  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:43.276688  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:43.276770  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:43.358338  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:43.358380  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:43.402206  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:43.402234  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:43.499249  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:43.499289  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:43.576572  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:43.568454    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.569067    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.570849    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.571386    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.572871    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:43.568454    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.569067    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.570849    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.571386    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.572871    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:43.576591  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:43.576605  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:43.604599  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:43.604686  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:43.650961  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:43.651038  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:46.181580  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:46.192165  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:46.192233  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:46.218480  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:46.218500  319301 cri.go:96] found id: ""
	I1227 20:10:46.218509  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:46.218563  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:46.222189  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:46.222263  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:46.253302  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:46.253327  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:46.253332  319301 cri.go:96] found id: ""
	I1227 20:10:46.253340  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:46.253398  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:46.257309  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:46.260898  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:46.260974  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:46.289145  319301 cri.go:96] found id: ""
	I1227 20:10:46.289218  319301 logs.go:282] 0 containers: []
	W1227 20:10:46.289241  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:46.289262  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:46.289352  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:46.318927  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:46.318948  319301 cri.go:96] found id: ""
	I1227 20:10:46.318956  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:46.319015  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:46.322605  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:46.322674  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:46.354035  319301 cri.go:96] found id: ""
	I1227 20:10:46.354061  319301 logs.go:282] 0 containers: []
	W1227 20:10:46.354071  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:46.354077  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:46.354168  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:46.384710  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:46.384734  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:46.384740  319301 cri.go:96] found id: ""
	I1227 20:10:46.384748  319301 logs.go:282] 2 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:46.384803  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:46.388496  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:46.392532  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:46.392611  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:46.421588  319301 cri.go:96] found id: ""
	I1227 20:10:46.421664  319301 logs.go:282] 0 containers: []
	W1227 20:10:46.421686  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:46.421709  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:46.421746  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:46.439228  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:46.439330  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:46.484770  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:46.484806  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:46.519247  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:46.519273  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:46.597066  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:46.597101  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:46.634009  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:46.634040  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:46.701472  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:46.693690    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.694466    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.695987    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.696422    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.697877    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:46.693690    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.694466    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.695987    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.696422    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.697877    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:46.701496  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:46.701512  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:46.729296  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:46.729326  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:46.774639  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:46.774678  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:46.799969  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:46.800005  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:46.826163  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:46.826192  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:49.429141  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:49.439610  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:49.439705  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:49.470260  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:49.470283  319301 cri.go:96] found id: ""
	I1227 20:10:49.470292  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:49.470350  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:49.474256  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:49.474343  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:49.501740  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:49.501762  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:49.501767  319301 cri.go:96] found id: ""
	I1227 20:10:49.501774  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:49.501850  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:49.505843  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:49.509390  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:49.509489  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:49.543998  319301 cri.go:96] found id: ""
	I1227 20:10:49.544022  319301 logs.go:282] 0 containers: []
	W1227 20:10:49.544041  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:49.544049  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:49.544107  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:49.570494  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:49.570517  319301 cri.go:96] found id: ""
	I1227 20:10:49.570525  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:49.570581  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:49.574401  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:49.574471  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:49.603448  319301 cri.go:96] found id: ""
	I1227 20:10:49.603475  319301 logs.go:282] 0 containers: []
	W1227 20:10:49.603486  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:49.603500  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:49.603573  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:49.633356  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:49.633379  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:49.633385  319301 cri.go:96] found id: ""
	I1227 20:10:49.633392  319301 logs.go:282] 2 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:49.633474  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:49.637216  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:49.641370  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:49.641472  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:49.669518  319301 cri.go:96] found id: ""
	I1227 20:10:49.669557  319301 logs.go:282] 0 containers: []
	W1227 20:10:49.669567  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:49.669576  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:49.669588  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:49.696361  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:49.696389  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:49.721155  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:49.721184  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:49.753420  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:49.753489  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:49.832989  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:49.833025  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:49.874986  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:49.875013  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:49.978286  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:49.978321  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:49.997322  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:49.997351  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:50.080526  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:50.072015    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.072678    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.074595    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.075259    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.076874    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:50.072015    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.072678    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.074595    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.075259    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.076874    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:50.080546  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:50.080560  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:50.139866  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:50.139902  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:50.184649  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:50.184682  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:52.713968  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:52.726778  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:52.726855  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:52.758017  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:52.758040  319301 cri.go:96] found id: ""
	I1227 20:10:52.758049  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:52.758104  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:52.761780  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:52.761855  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:52.789053  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:52.789076  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:52.789081  319301 cri.go:96] found id: ""
	I1227 20:10:52.789088  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:52.789140  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:52.792812  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:52.796144  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:52.796211  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:52.825853  319301 cri.go:96] found id: ""
	I1227 20:10:52.825883  319301 logs.go:282] 0 containers: []
	W1227 20:10:52.825892  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:52.825898  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:52.825955  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:52.851800  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:52.851820  319301 cri.go:96] found id: ""
	I1227 20:10:52.851828  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:52.851881  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:52.855382  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:52.855455  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:52.885699  319301 cri.go:96] found id: ""
	I1227 20:10:52.885721  319301 logs.go:282] 0 containers: []
	W1227 20:10:52.885736  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:52.885742  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:52.885800  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:52.911251  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:52.911316  319301 cri.go:96] found id: ""
	I1227 20:10:52.911339  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:10:52.911402  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:52.914760  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:52.914841  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:52.939685  319301 cri.go:96] found id: ""
	I1227 20:10:52.939718  319301 logs.go:282] 0 containers: []
	W1227 20:10:52.939728  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:52.939742  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:52.939789  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:53.033951  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:53.033990  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:53.052877  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:53.052906  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:53.096670  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:53.096715  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:53.128695  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:53.128722  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:53.161100  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:53.161130  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:53.227545  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:53.218833    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.219420    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.221028    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.221951    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.223525    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:53.218833    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.219420    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.221028    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.221951    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.223525    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:53.227617  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:53.227640  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:53.255984  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:53.256125  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:53.313035  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:53.313074  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:53.338975  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:53.339057  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:55.915383  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:55.925492  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:55.925565  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:55.952010  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:55.952028  319301 cri.go:96] found id: ""
	I1227 20:10:55.952037  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:55.952092  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:55.955593  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:55.955667  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:55.986538  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:55.986561  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:55.986567  319301 cri.go:96] found id: ""
	I1227 20:10:55.986574  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:55.986628  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:55.990714  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:55.995050  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:55.995121  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:56.024488  319301 cri.go:96] found id: ""
	I1227 20:10:56.024565  319301 logs.go:282] 0 containers: []
	W1227 20:10:56.024588  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:56.024612  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:56.024696  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:56.056966  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:56.057039  319301 cri.go:96] found id: ""
	I1227 20:10:56.057065  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:56.057155  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:56.061997  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:56.062234  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:56.089345  319301 cri.go:96] found id: ""
	I1227 20:10:56.089372  319301 logs.go:282] 0 containers: []
	W1227 20:10:56.089381  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:56.089388  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:56.089488  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:56.117758  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:56.117782  319301 cri.go:96] found id: ""
	I1227 20:10:56.117790  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:10:56.117845  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:56.121319  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:56.121432  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:56.147067  319301 cri.go:96] found id: ""
	I1227 20:10:56.147092  319301 logs.go:282] 0 containers: []
	W1227 20:10:56.147102  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:56.147115  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:56.147130  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:56.224179  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:56.224218  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:56.256694  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:56.256721  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:56.283858  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:56.283889  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:56.353505  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:56.342078    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.343458    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.346135    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.347096    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.347948    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:56.342078    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.343458    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.346135    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.347096    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.347948    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:56.353534  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:56.353548  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:56.399836  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:56.399870  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:56.494637  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:56.494677  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:56.528262  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:56.528292  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:56.577163  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:56.577198  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:56.605916  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:56.605945  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
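Each collection pass above follows the same shape: minikube locates crictl, asks CRI-O for the container IDs of each control-plane component, then tails the last 400 lines of every container it found, plus journald for the host services. A rough manual equivalent is sketched below; the commands are the ones recorded in the log, while the minikube ssh wrapper and the <profile> placeholder are assumptions for illustration, not values from this run.

	# Hypothetical manual repro of the collection pass; <profile> and <container-id> are placeholders.
	minikube ssh -p <profile> -- 'sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver'
	# For each ID returned, tail its logs the same way the harness does:
	minikube ssh -p <profile> -- 'sudo /usr/local/bin/crictl logs --tail 400 <container-id>'
	# Host-level logs come straight from journald:
	minikube ssh -p <profile> -- 'sudo journalctl -u crio -n 400'
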
	I1227 20:10:59.134704  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:59.144988  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:59.145094  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:59.170826  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:59.170846  319301 cri.go:96] found id: ""
	I1227 20:10:59.170859  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:59.170916  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:59.174542  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:59.174618  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:59.204712  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:59.204734  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:59.204738  319301 cri.go:96] found id: ""
	I1227 20:10:59.204746  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:59.204800  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:59.208625  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:59.212119  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:59.212200  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:59.241075  319301 cri.go:96] found id: ""
	I1227 20:10:59.241150  319301 logs.go:282] 0 containers: []
	W1227 20:10:59.241174  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:59.241195  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:59.241312  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:59.277168  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:59.277252  319301 cri.go:96] found id: ""
	I1227 20:10:59.277274  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:59.277366  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:59.281934  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:59.282029  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:59.307601  319301 cri.go:96] found id: ""
	I1227 20:10:59.307627  319301 logs.go:282] 0 containers: []
	W1227 20:10:59.307636  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:59.307643  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:59.307704  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:59.341899  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:59.341923  319301 cri.go:96] found id: ""
	I1227 20:10:59.341931  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:10:59.341999  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:59.345734  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:59.345844  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:59.371593  319301 cri.go:96] found id: ""
	I1227 20:10:59.371661  319301 logs.go:282] 0 containers: []
	W1227 20:10:59.371683  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:59.371716  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:59.371755  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:59.464618  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:59.464654  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:59.483758  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:59.483793  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:59.555654  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:59.546856    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.547308    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.548491    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.548938    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.550344    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:59.546856    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.547308    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.548491    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.548938    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.550344    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:59.555678  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:59.555696  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:59.583971  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:59.584004  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:59.635084  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:59.635118  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:59.662345  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:59.662375  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:59.726915  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:59.726950  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:59.754060  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:59.754094  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:59.836493  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:59.836534  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:02.376222  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:02.386794  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:02.386868  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:02.419031  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:02.419054  319301 cri.go:96] found id: ""
	I1227 20:11:02.419062  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:02.419118  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:02.423033  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:02.423106  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:02.448867  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:02.448891  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:02.448896  319301 cri.go:96] found id: ""
	I1227 20:11:02.448903  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:02.448957  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:02.452561  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:02.455963  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:02.456070  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:02.484254  319301 cri.go:96] found id: ""
	I1227 20:11:02.484281  319301 logs.go:282] 0 containers: []
	W1227 20:11:02.484290  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:02.484297  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:02.484357  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:02.511483  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:02.511506  319301 cri.go:96] found id: ""
	I1227 20:11:02.511515  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:02.511580  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:02.515291  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:02.515364  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:02.542839  319301 cri.go:96] found id: ""
	I1227 20:11:02.542866  319301 logs.go:282] 0 containers: []
	W1227 20:11:02.542886  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:02.542894  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:02.543025  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:02.576471  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:02.576505  319301 cri.go:96] found id: ""
	I1227 20:11:02.576519  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:02.576578  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:02.580126  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:02.580205  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:02.610225  319301 cri.go:96] found id: ""
	I1227 20:11:02.610252  319301 logs.go:282] 0 containers: []
	W1227 20:11:02.610261  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:02.610275  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:02.610316  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:02.640738  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:02.640766  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:02.688087  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:02.688120  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:02.714149  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:02.714175  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:02.743134  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:02.743161  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:02.822169  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:02.822206  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:02.894561  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:02.894595  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:02.936069  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:02.936096  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:03.036539  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:03.036573  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:03.054449  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:03.054480  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:03.132045  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:03.124246    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.125028    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.126504    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.127054    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.128486    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:03.124246    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.125028    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.126504    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.127054    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.128486    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
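Every "describe nodes" attempt in this window fails the same way: kubectl inside the node dials the apiserver at localhost:8443 and is refused, so only the host-level sources (kubelet, CRI-O, dmesg, container status) return anything. The check can be repeated by hand with the exact command the log records, and if it still refuses, the apiserver container logs gathered above are the next place to look. As before, the minikube ssh wrapper and <profile> are illustrative assumptions; the inner commands and the container ID are taken from the log.

	# Hypothetical manual check; <profile> stands in for the test's profile name.
	minikube ssh -p <profile> -- 'sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig'
	# If the connection is still refused, inspect the apiserver container seen in this run:
	minikube ssh -p <profile> -- 'sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722'
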
	I1227 20:11:05.633596  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:05.644441  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:05.644564  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:05.671495  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:05.671520  319301 cri.go:96] found id: ""
	I1227 20:11:05.671528  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:05.671603  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:05.675058  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:05.675148  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:05.699421  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:05.699443  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:05.699448  319301 cri.go:96] found id: ""
	I1227 20:11:05.699456  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:05.699512  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:05.703223  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:05.706661  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:05.706747  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:05.731295  319301 cri.go:96] found id: ""
	I1227 20:11:05.731319  319301 logs.go:282] 0 containers: []
	W1227 20:11:05.731328  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:05.731334  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:05.731409  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:05.758394  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:05.758427  319301 cri.go:96] found id: ""
	I1227 20:11:05.758435  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:05.758500  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:05.762213  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:05.762304  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:05.788439  319301 cri.go:96] found id: ""
	I1227 20:11:05.788465  319301 logs.go:282] 0 containers: []
	W1227 20:11:05.788473  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:05.788480  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:05.788546  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:05.814115  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:05.814137  319301 cri.go:96] found id: ""
	I1227 20:11:05.814145  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:05.814199  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:05.817823  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:05.817893  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:05.844939  319301 cri.go:96] found id: ""
	I1227 20:11:05.844963  319301 logs.go:282] 0 containers: []
	W1227 20:11:05.844973  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:05.844988  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:05.845002  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:05.863023  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:05.863054  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:05.932754  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:05.924777    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.925338    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.926988    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.927561    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.928952    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:05.924777    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.925338    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.926988    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.927561    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.928952    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:05.932785  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:05.932802  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:05.960574  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:05.960604  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:06.004048  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:06.004082  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:06.055406  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:06.055441  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:06.082613  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:06.082643  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:06.115617  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:06.115646  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:06.149699  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:06.149729  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:06.250917  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:06.250950  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:08.830917  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:08.841316  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:08.841404  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:08.871386  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:08.871407  319301 cri.go:96] found id: ""
	I1227 20:11:08.871415  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:08.871483  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:08.875249  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:08.875334  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:08.905155  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:08.905178  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:08.905182  319301 cri.go:96] found id: ""
	I1227 20:11:08.905189  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:08.905256  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:08.909157  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:08.912623  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:08.912696  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:08.940125  319301 cri.go:96] found id: ""
	I1227 20:11:08.940151  319301 logs.go:282] 0 containers: []
	W1227 20:11:08.940161  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:08.940168  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:08.940228  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:08.979078  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:08.979099  319301 cri.go:96] found id: ""
	I1227 20:11:08.979115  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:08.979172  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:08.982993  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:08.983079  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:09.010456  319301 cri.go:96] found id: ""
	I1227 20:11:09.010482  319301 logs.go:282] 0 containers: []
	W1227 20:11:09.010491  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:09.010498  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:09.010559  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:09.046193  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:09.046226  319301 cri.go:96] found id: ""
	I1227 20:11:09.046235  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:09.046293  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:09.050361  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:09.050429  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:09.076865  319301 cri.go:96] found id: ""
	I1227 20:11:09.076892  319301 logs.go:282] 0 containers: []
	W1227 20:11:09.076901  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:09.076917  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:09.076929  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:09.103766  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:09.103793  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:09.121384  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:09.121412  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:09.190959  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:09.182712    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.183470    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.185037    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.185570    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.187248    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:09.182712    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.183470    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.185037    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.185570    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.187248    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:09.191026  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:09.191058  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:09.238609  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:09.238648  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:09.332804  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:09.332844  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:09.374845  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:09.374874  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:09.475731  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:09.475770  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:09.505046  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:09.505075  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:09.550742  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:09.550779  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:12.077490  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:12.089114  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:12.089187  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:12.117965  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:12.117987  319301 cri.go:96] found id: ""
	I1227 20:11:12.117995  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:12.118048  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:12.121654  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:12.121727  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:12.150616  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:12.150645  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:12.150650  319301 cri.go:96] found id: ""
	I1227 20:11:12.150658  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:12.150714  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:12.154526  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:12.157975  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:12.158059  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:12.188379  319301 cri.go:96] found id: ""
	I1227 20:11:12.188406  319301 logs.go:282] 0 containers: []
	W1227 20:11:12.188415  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:12.188421  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:12.188479  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:12.214099  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:12.214125  319301 cri.go:96] found id: ""
	I1227 20:11:12.214134  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:12.214187  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:12.217805  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:12.217871  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:12.244974  319301 cri.go:96] found id: ""
	I1227 20:11:12.244999  319301 logs.go:282] 0 containers: []
	W1227 20:11:12.245008  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:12.245015  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:12.245071  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:12.281031  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:12.281071  319301 cri.go:96] found id: ""
	I1227 20:11:12.281079  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:12.281146  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:12.284926  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:12.285004  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:12.311055  319301 cri.go:96] found id: ""
	I1227 20:11:12.311079  319301 logs.go:282] 0 containers: []
	W1227 20:11:12.311088  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:12.311101  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:12.311113  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:12.330032  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:12.330065  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:12.359973  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:12.360000  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:12.405129  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:12.405163  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:12.460783  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:12.460817  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:12.488201  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:12.488230  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:12.565465  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:12.565502  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:12.662969  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:12.663007  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:12.735836  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:12.727495    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.728366    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.730010    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.730324    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.731834    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:12.727495    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.728366    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.730010    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.730324    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.731834    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:12.735859  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:12.735872  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:12.763143  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:12.763168  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:15.305823  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:15.318015  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:15.318113  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:15.347994  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:15.348017  319301 cri.go:96] found id: ""
	I1227 20:11:15.348026  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:15.348089  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:15.351955  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:15.352056  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:15.378004  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:15.378026  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:15.378031  319301 cri.go:96] found id: ""
	I1227 20:11:15.378038  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:15.378091  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:15.381599  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:15.384824  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:15.384889  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:15.409597  319301 cri.go:96] found id: ""
	I1227 20:11:15.409673  319301 logs.go:282] 0 containers: []
	W1227 20:11:15.409695  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:15.409716  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:15.409805  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:15.436026  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:15.436091  319301 cri.go:96] found id: ""
	I1227 20:11:15.436114  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:15.436205  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:15.439709  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:15.439776  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:15.472950  319301 cri.go:96] found id: ""
	I1227 20:11:15.472974  319301 logs.go:282] 0 containers: []
	W1227 20:11:15.472983  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:15.472990  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:15.473047  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:15.503060  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:15.503083  319301 cri.go:96] found id: ""
	I1227 20:11:15.503092  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:15.503166  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:15.506772  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:15.506841  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:15.531805  319301 cri.go:96] found id: ""
	I1227 20:11:15.531828  319301 logs.go:282] 0 containers: []
	W1227 20:11:15.531837  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:15.531849  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:15.531861  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:15.557217  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:15.557253  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:15.583522  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:15.583550  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:15.646957  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:15.646994  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:15.677573  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:15.677601  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:15.763080  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:15.763117  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:15.795445  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:15.795473  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:15.895027  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:15.895063  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:15.914036  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:15.914065  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:15.990029  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:15.981434    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.982226    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.983747    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.984333    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.986074    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:15.981434    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.982226    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.983747    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.984333    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.986074    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:15.990048  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:15.990061  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:18.535347  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:18.545638  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:18.545712  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:18.573096  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:18.573125  319301 cri.go:96] found id: ""
	I1227 20:11:18.573135  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:18.573190  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:18.577413  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:18.577512  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:18.604633  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:18.604657  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:18.604662  319301 cri.go:96] found id: ""
	I1227 20:11:18.604670  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:18.604724  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:18.610098  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:18.613744  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:18.613821  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:18.645090  319301 cri.go:96] found id: ""
	I1227 20:11:18.645116  319301 logs.go:282] 0 containers: []
	W1227 20:11:18.645126  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:18.645132  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:18.645191  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:18.671681  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:18.671705  319301 cri.go:96] found id: ""
	I1227 20:11:18.671713  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:18.671768  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:18.675284  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:18.675356  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:18.701086  319301 cri.go:96] found id: ""
	I1227 20:11:18.701109  319301 logs.go:282] 0 containers: []
	W1227 20:11:18.701117  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:18.701123  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:18.701183  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:18.733157  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:18.733176  319301 cri.go:96] found id: ""
	I1227 20:11:18.733185  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:18.733237  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:18.736898  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:18.736978  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:18.761319  319301 cri.go:96] found id: ""
	I1227 20:11:18.761340  319301 logs.go:282] 0 containers: []
	W1227 20:11:18.761349  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:18.761362  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:18.761374  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:18.793077  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:18.793104  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:18.819425  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:18.819453  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:18.859846  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:18.859919  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:18.938269  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:18.938303  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:19.040817  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:19.040856  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:19.059170  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:19.059202  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:19.132074  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:19.121248    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.122916    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.123583    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.125207    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.125782    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:19.121248    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.122916    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.123583    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.125207    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.125782    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:19.132096  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:19.132111  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:19.179880  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:19.179916  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:19.223928  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:19.223963  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:21.759181  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:21.769762  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:21.769833  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:21.800302  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:21.800323  319301 cri.go:96] found id: ""
	I1227 20:11:21.800332  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:21.800395  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:21.804375  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:21.804458  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:21.830687  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:21.830711  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:21.830717  319301 cri.go:96] found id: ""
	I1227 20:11:21.830724  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:21.830779  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:21.834661  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:21.838097  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:21.838198  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:21.864157  319301 cri.go:96] found id: ""
	I1227 20:11:21.864183  319301 logs.go:282] 0 containers: []
	W1227 20:11:21.864193  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:21.864199  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:21.864292  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:21.890722  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:21.890747  319301 cri.go:96] found id: ""
	I1227 20:11:21.890756  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:21.890812  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:21.894377  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:21.894447  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:21.921902  319301 cri.go:96] found id: ""
	I1227 20:11:21.921932  319301 logs.go:282] 0 containers: []
	W1227 20:11:21.921941  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:21.921948  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:21.922013  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:21.948157  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:21.948181  319301 cri.go:96] found id: ""
	I1227 20:11:21.948190  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:21.948246  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:21.951860  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:21.951928  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:21.979147  319301 cri.go:96] found id: ""
	I1227 20:11:21.979171  319301 logs.go:282] 0 containers: []
	W1227 20:11:21.979181  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:21.979222  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:21.979242  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:22.077716  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:22.077768  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:22.161527  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:22.149113    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.149745    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.154386    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.154984    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.157780    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:22.149113    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.149745    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.154386    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.154984    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.157780    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:22.161553  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:22.161566  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:22.193359  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:22.193386  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:22.247574  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:22.247611  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:22.302993  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:22.303034  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:22.332035  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:22.332064  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:22.358225  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:22.358265  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:22.437089  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:22.437124  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:22.455750  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:22.455781  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:24.990837  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:25.001120  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:25.001190  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:25.040369  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:25.040388  319301 cri.go:96] found id: ""
	I1227 20:11:25.040396  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:25.040452  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:25.044321  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:25.044388  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:25.075240  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:25.075264  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:25.075268  319301 cri.go:96] found id: ""
	I1227 20:11:25.075276  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:25.075331  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:25.079221  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:25.083046  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:25.083117  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:25.111437  319301 cri.go:96] found id: ""
	I1227 20:11:25.111466  319301 logs.go:282] 0 containers: []
	W1227 20:11:25.111475  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:25.111482  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:25.111540  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:25.139474  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:25.139498  319301 cri.go:96] found id: ""
	I1227 20:11:25.139507  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:25.139572  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:25.143469  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:25.143540  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:25.177080  319301 cri.go:96] found id: ""
	I1227 20:11:25.177103  319301 logs.go:282] 0 containers: []
	W1227 20:11:25.177112  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:25.177119  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:25.177235  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:25.204123  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:25.204146  319301 cri.go:96] found id: ""
	I1227 20:11:25.204155  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:25.204238  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:25.207906  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:25.207978  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:25.233127  319301 cri.go:96] found id: ""
	I1227 20:11:25.233150  319301 logs.go:282] 0 containers: []
	W1227 20:11:25.233160  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:25.233175  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:25.233187  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:25.252764  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:25.252793  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:25.302886  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:25.302924  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:25.327231  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:25.327259  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:25.357720  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:25.357749  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:25.396486  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:25.396513  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:25.469872  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:25.461875    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.462332    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.464006    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.464571    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.466153    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:25.461875    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.462332    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.464006    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.464571    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.466153    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:25.469894  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:25.469907  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:25.498176  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:25.498204  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:25.547245  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:25.547279  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:25.629600  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:25.629639  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:28.230549  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:28.241564  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:28.241641  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:28.279080  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:28.279110  319301 cri.go:96] found id: ""
	I1227 20:11:28.279119  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:28.279185  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:28.284314  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:28.284405  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:28.316322  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:28.316389  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:28.316408  319301 cri.go:96] found id: ""
	I1227 20:11:28.316436  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:28.316522  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:28.320358  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:28.323910  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:28.324004  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:28.354101  319301 cri.go:96] found id: ""
	I1227 20:11:28.354172  319301 logs.go:282] 0 containers: []
	W1227 20:11:28.354195  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:28.354221  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:28.354308  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:28.381894  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:28.381933  319301 cri.go:96] found id: ""
	I1227 20:11:28.381944  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:28.382007  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:28.385565  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:28.385640  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:28.412036  319301 cri.go:96] found id: ""
	I1227 20:11:28.412063  319301 logs.go:282] 0 containers: []
	W1227 20:11:28.412072  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:28.412079  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:28.412136  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:28.437133  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:28.437154  319301 cri.go:96] found id: ""
	I1227 20:11:28.437162  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:28.437216  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:28.440922  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:28.441006  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:28.469470  319301 cri.go:96] found id: ""
	I1227 20:11:28.469495  319301 logs.go:282] 0 containers: []
	W1227 20:11:28.469505  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:28.469518  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:28.469531  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:28.512248  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:28.512281  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:28.538806  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:28.538834  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:28.615719  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:28.615756  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:28.651963  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:28.651992  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:28.753577  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:28.753616  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:28.770745  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:28.770778  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:28.798843  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:28.798878  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:28.867106  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:28.858730    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.859584    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.861103    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.861408    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.863356    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:28.858730    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.859584    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.861103    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.861408    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.863356    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:28.867124  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:28.867137  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:28.897868  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:28.897897  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:31.455673  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:31.466341  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:31.466412  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:31.494286  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:31.494305  319301 cri.go:96] found id: ""
	I1227 20:11:31.494312  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:31.494368  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:31.499152  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:31.499229  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:31.525626  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:31.525647  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:31.525651  319301 cri.go:96] found id: ""
	I1227 20:11:31.525666  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:31.525721  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:31.529291  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:31.532543  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:31.532612  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:31.558153  319301 cri.go:96] found id: ""
	I1227 20:11:31.558178  319301 logs.go:282] 0 containers: []
	W1227 20:11:31.558187  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:31.558193  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:31.558274  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:31.585024  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:31.585047  319301 cri.go:96] found id: ""
	I1227 20:11:31.585055  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:31.585109  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:31.588772  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:31.588841  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:31.615373  319301 cri.go:96] found id: ""
	I1227 20:11:31.615398  319301 logs.go:282] 0 containers: []
	W1227 20:11:31.615408  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:31.615414  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:31.615474  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:31.644548  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:31.644571  319301 cri.go:96] found id: ""
	I1227 20:11:31.644579  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:31.644634  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:31.648326  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:31.648396  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:31.674106  319301 cri.go:96] found id: ""
	I1227 20:11:31.674128  319301 logs.go:282] 0 containers: []
	W1227 20:11:31.674137  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:31.674152  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:31.674165  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:31.769885  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:31.769924  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:31.787798  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:31.787829  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:31.840240  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:31.840276  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:31.883880  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:31.883914  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:31.912615  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:31.912645  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:31.993762  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:31.993796  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:32.038771  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:32.038807  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:32.113504  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:32.105141    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.106007    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.106783    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.108406    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.108703    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:32.105141    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.106007    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.106783    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.108406    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.108703    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:32.113531  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:32.113545  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:32.145482  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:32.145508  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:34.675972  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:34.687181  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:34.687251  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:34.713741  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:34.713768  319301 cri.go:96] found id: ""
	I1227 20:11:34.713776  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:34.713837  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:34.717422  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:34.717525  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:34.742801  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:34.742824  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:34.742829  319301 cri.go:96] found id: ""
	I1227 20:11:34.742836  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:34.742890  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:34.746901  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:34.750347  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:34.750438  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:34.776122  319301 cri.go:96] found id: ""
	I1227 20:11:34.776156  319301 logs.go:282] 0 containers: []
	W1227 20:11:34.776165  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:34.776173  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:34.776241  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:34.801663  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:34.801687  319301 cri.go:96] found id: ""
	I1227 20:11:34.801696  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:34.801752  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:34.805521  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:34.805600  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:34.839033  319301 cri.go:96] found id: ""
	I1227 20:11:34.839059  319301 logs.go:282] 0 containers: []
	W1227 20:11:34.839068  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:34.839075  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:34.839164  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:34.875359  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:34.875380  319301 cri.go:96] found id: ""
	I1227 20:11:34.875389  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:34.875444  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:34.879108  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:34.879203  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:34.904808  319301 cri.go:96] found id: ""
	I1227 20:11:34.904831  319301 logs.go:282] 0 containers: []
	W1227 20:11:34.904839  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:34.904882  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:34.904902  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:35.001157  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:35.001197  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:35.036396  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:35.036492  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:35.100412  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:35.100452  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:35.130486  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:35.130514  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:35.212133  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:35.212170  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:35.261425  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:35.261489  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:35.279972  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:35.280002  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:35.344789  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:35.336875    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.337423    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.338974    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.339514    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.340959    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:35.336875    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.337423    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.338974    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.339514    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.340959    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:35.344811  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:35.344826  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:35.388398  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:35.388438  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:37.916139  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:37.926579  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:37.926656  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:37.957965  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:37.957990  319301 cri.go:96] found id: ""
	I1227 20:11:37.958011  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:37.958064  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:37.961819  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:37.961939  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:37.990732  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:37.990756  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:37.990763  319301 cri.go:96] found id: ""
	I1227 20:11:37.990774  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:37.990832  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:37.994865  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:37.998563  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:37.998657  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:38.029180  319301 cri.go:96] found id: ""
	I1227 20:11:38.029206  319301 logs.go:282] 0 containers: []
	W1227 20:11:38.029228  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:38.029235  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:38.029302  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:38.058262  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:38.058287  319301 cri.go:96] found id: ""
	I1227 20:11:38.058295  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:38.058390  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:38.062798  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:38.062895  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:38.093594  319301 cri.go:96] found id: ""
	I1227 20:11:38.093630  319301 logs.go:282] 0 containers: []
	W1227 20:11:38.093641  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:38.093647  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:38.093723  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:38.122677  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:38.122700  319301 cri.go:96] found id: ""
	I1227 20:11:38.122710  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:38.122784  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:38.126481  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:38.126556  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:38.152399  319301 cri.go:96] found id: ""
	I1227 20:11:38.152425  319301 logs.go:282] 0 containers: []
	W1227 20:11:38.152434  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:38.152447  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:38.152459  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:38.169834  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:38.169865  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:38.236553  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:38.228832    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.229398    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.230976    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.231455    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.232939    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:38.228832    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.229398    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.230976    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.231455    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.232939    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:38.236574  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:38.236587  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:38.283907  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:38.283942  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:38.327559  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:38.327595  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:38.354915  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:38.354944  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:38.385535  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:38.385567  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:38.482920  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:38.482955  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:38.513709  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:38.513737  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:38.541063  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:38.541092  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:41.120061  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:41.130482  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:41.130560  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:41.157933  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:41.157995  319301 cri.go:96] found id: ""
	I1227 20:11:41.158011  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:41.158068  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:41.161515  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:41.161587  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:41.186761  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:41.186784  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:41.186789  319301 cri.go:96] found id: ""
	I1227 20:11:41.186796  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:41.186853  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:41.190548  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:41.194929  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:41.195019  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:41.225573  319301 cri.go:96] found id: ""
	I1227 20:11:41.225600  319301 logs.go:282] 0 containers: []
	W1227 20:11:41.225609  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:41.225615  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:41.225678  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:41.255736  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:41.255810  319301 cri.go:96] found id: ""
	I1227 20:11:41.255833  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:41.255924  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:41.259619  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:41.259730  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:41.293635  319301 cri.go:96] found id: ""
	I1227 20:11:41.293658  319301 logs.go:282] 0 containers: []
	W1227 20:11:41.293667  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:41.293674  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:41.293736  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:41.325226  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:41.325248  319301 cri.go:96] found id: ""
	I1227 20:11:41.325257  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:41.325311  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:41.328850  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:41.328919  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:41.356320  319301 cri.go:96] found id: ""
	I1227 20:11:41.356345  319301 logs.go:282] 0 containers: []
	W1227 20:11:41.356354  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:41.356370  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:41.356383  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:41.384750  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:41.384777  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:41.438279  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:41.438315  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:41.496771  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:41.496814  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:41.525343  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:41.525373  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:41.558207  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:41.558235  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:41.657075  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:41.657112  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:41.689798  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:41.689828  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:41.769585  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:41.769620  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:41.787874  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:41.787906  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:41.852555  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:41.844441    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.845015    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.846678    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.847233    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.849010    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:41.844441    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.845015    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.846678    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.847233    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.849010    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:44.353586  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:44.364496  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:44.364591  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:44.396750  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:44.396823  319301 cri.go:96] found id: ""
	I1227 20:11:44.396848  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:44.396920  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:44.400610  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:44.400687  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:44.428171  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:44.428250  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:44.428271  319301 cri.go:96] found id: ""
	I1227 20:11:44.428296  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:44.428411  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:44.432219  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:44.435828  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:44.435901  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:44.464904  319301 cri.go:96] found id: ""
	I1227 20:11:44.464931  319301 logs.go:282] 0 containers: []
	W1227 20:11:44.464953  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:44.464960  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:44.465019  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:44.494508  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:44.494537  319301 cri.go:96] found id: ""
	I1227 20:11:44.494546  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:44.494602  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:44.498485  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:44.498588  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:44.526221  319301 cri.go:96] found id: ""
	I1227 20:11:44.526249  319301 logs.go:282] 0 containers: []
	W1227 20:11:44.526258  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:44.526264  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:44.526337  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:44.557553  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:44.557629  319301 cri.go:96] found id: ""
	I1227 20:11:44.557644  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:44.557713  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:44.561435  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:44.561578  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:44.588202  319301 cri.go:96] found id: ""
	I1227 20:11:44.588227  319301 logs.go:282] 0 containers: []
	W1227 20:11:44.588236  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:44.588250  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:44.588281  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:44.636647  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:44.636688  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:44.715003  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:44.715041  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:44.746461  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:44.746488  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:44.840354  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:44.840392  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:44.910107  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:44.902375    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.902947    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.904566    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.905162    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.906700    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:44.902375    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.902947    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.904566    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.905162    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.906700    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:44.910127  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:44.910139  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:44.958123  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:44.958155  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:44.988455  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:44.988486  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:45.017637  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:45.017669  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:45.068015  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:45.068047  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:47.639577  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:47.650807  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:47.650879  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:47.680709  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:47.680780  319301 cri.go:96] found id: ""
	I1227 20:11:47.680801  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:47.680886  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:47.684862  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:47.684933  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:47.711503  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:47.711527  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:47.711533  319301 cri.go:96] found id: ""
	I1227 20:11:47.711541  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:47.711597  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:47.715323  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:47.718860  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:47.718939  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:47.745091  319301 cri.go:96] found id: ""
	I1227 20:11:47.745118  319301 logs.go:282] 0 containers: []
	W1227 20:11:47.745128  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:47.745134  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:47.745190  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:47.774661  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:47.774683  319301 cri.go:96] found id: ""
	I1227 20:11:47.774691  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:47.774751  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:47.778781  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:47.778879  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:47.805242  319301 cri.go:96] found id: ""
	I1227 20:11:47.805268  319301 logs.go:282] 0 containers: []
	W1227 20:11:47.805278  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:47.805284  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:47.805350  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:47.833172  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:47.833240  319301 cri.go:96] found id: ""
	I1227 20:11:47.833262  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:47.833351  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:47.837087  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:47.837159  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:47.865275  319301 cri.go:96] found id: ""
	I1227 20:11:47.865353  319301 logs.go:282] 0 containers: []
	W1227 20:11:47.865380  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:47.865432  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:47.865505  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:47.944986  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:47.945022  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:47.980482  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:47.980511  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:47.999608  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:47.999639  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:48.076328  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:48.067348    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.068343    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.070039    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.070763    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.072273    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:48.067348    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.068343    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.070039    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.070763    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.072273    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:48.076352  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:48.076365  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:48.102940  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:48.102968  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:48.195452  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:48.195490  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:48.225373  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:48.225402  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:48.273525  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:48.273604  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:48.325768  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:48.325805  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:50.855952  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:50.867387  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:50.867456  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:50.897533  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:50.897556  319301 cri.go:96] found id: ""
	I1227 20:11:50.897565  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:50.897617  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:50.900982  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:50.901048  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:50.935428  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:50.935450  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:50.935455  319301 cri.go:96] found id: ""
	I1227 20:11:50.935468  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:50.935521  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:50.939266  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:50.943149  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:50.943266  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:50.974808  319301 cri.go:96] found id: ""
	I1227 20:11:50.974842  319301 logs.go:282] 0 containers: []
	W1227 20:11:50.974852  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:50.974859  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:50.974928  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:51.001867  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:51.001890  319301 cri.go:96] found id: ""
	I1227 20:11:51.001899  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:51.001957  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:51.005758  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:51.005831  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:51.035904  319301 cri.go:96] found id: ""
	I1227 20:11:51.035979  319301 logs.go:282] 0 containers: []
	W1227 20:11:51.036002  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:51.036026  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:51.036134  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:51.064190  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:51.064213  319301 cri.go:96] found id: ""
	I1227 20:11:51.064222  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:51.064277  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:51.068971  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:51.069043  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:51.098066  319301 cri.go:96] found id: ""
	I1227 20:11:51.098092  319301 logs.go:282] 0 containers: []
	W1227 20:11:51.098101  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:51.098116  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:51.098128  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:51.193690  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:51.193731  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:51.236544  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:51.236578  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:51.275361  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:51.275397  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:51.309801  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:51.309827  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:51.327683  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:51.327711  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:51.401236  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:51.392227    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.393287    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.394285    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.395538    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.396222    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:51.392227    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.393287    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.394285    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.395538    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.396222    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:51.401259  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:51.401273  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:51.429955  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:51.429985  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:51.492625  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:51.492662  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:51.518481  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:51.518512  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:54.100065  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:54.111435  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:54.111510  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:54.142927  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:54.142956  319301 cri.go:96] found id: ""
	I1227 20:11:54.142975  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:54.143064  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:54.147093  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:54.147233  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:54.173813  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:54.173832  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:54.173837  319301 cri.go:96] found id: ""
	I1227 20:11:54.173844  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:54.173903  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:54.177570  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:54.181008  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:54.181079  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:54.206624  319301 cri.go:96] found id: ""
	I1227 20:11:54.206648  319301 logs.go:282] 0 containers: []
	W1227 20:11:54.206658  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:54.206664  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:54.206720  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:54.232185  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:54.232208  319301 cri.go:96] found id: ""
	I1227 20:11:54.232218  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:54.232281  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:54.236968  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:54.237047  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:54.266150  319301 cri.go:96] found id: ""
	I1227 20:11:54.266172  319301 logs.go:282] 0 containers: []
	W1227 20:11:54.266181  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:54.266187  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:54.266254  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:54.294800  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:54.294820  319301 cri.go:96] found id: ""
	I1227 20:11:54.294829  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:54.294880  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:54.298462  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:54.298526  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:54.323550  319301 cri.go:96] found id: ""
	I1227 20:11:54.323573  319301 logs.go:282] 0 containers: []
	W1227 20:11:54.323582  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:54.323599  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:54.323610  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:54.352757  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:54.352783  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:54.383438  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:54.383464  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:54.473431  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:54.473470  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:54.544121  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:54.535951    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.536753    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.538081    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.538522    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.540194    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:54.535951    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.536753    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.538081    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.538522    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.540194    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:54.544146  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:54.544162  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:54.587199  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:54.587231  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:54.625648  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:54.625675  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:54.708479  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:54.708513  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:54.727026  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:54.727055  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:54.758081  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:54.758110  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
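	(The round of which crictl / crictl ps calls above is minikube's container discovery for each control-plane component: only kube-apiserver, etcd, kube-scheduler and kube-controller-manager return IDs, while coredns, kube-proxy and kindnet come back empty. A minimal sketch of reproducing that discovery by hand inside the node follows; the profile placeholder and the use of minikube ssh are assumptions, the crictl invocations are the ones quoted verbatim in the log:
	    minikube ssh -p <profile>                                        # assumption: shell into the affected node
	    sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver    # print container IDs only, all states
	    sudo crictl --timeout=10s ps -a --quiet --name=etcd
	    sudo /usr/local/bin/crictl logs --tail 400 <container-id>        # last 400 lines of one container
	)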
	I1227 20:11:57.311000  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:57.321234  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:57.321311  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:57.349011  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:57.349030  319301 cri.go:96] found id: ""
	I1227 20:11:57.349038  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:57.349091  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:57.353198  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:57.353266  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:57.378464  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:57.378489  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:57.378494  319301 cri.go:96] found id: ""
	I1227 20:11:57.378502  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:57.378564  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:57.382492  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:57.385894  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:57.385975  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:57.410564  319301 cri.go:96] found id: ""
	I1227 20:11:57.410629  319301 logs.go:282] 0 containers: []
	W1227 20:11:57.410642  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:57.410650  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:57.410708  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:57.437790  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:57.437814  319301 cri.go:96] found id: ""
	I1227 20:11:57.437823  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:57.437881  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:57.441526  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:57.441645  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:57.467252  319301 cri.go:96] found id: ""
	I1227 20:11:57.467319  319301 logs.go:282] 0 containers: []
	W1227 20:11:57.467334  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:57.467342  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:57.467406  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:57.495037  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:57.495058  319301 cri.go:96] found id: ""
	I1227 20:11:57.495067  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:57.495123  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:57.498778  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:57.498878  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:57.528106  319301 cri.go:96] found id: ""
	I1227 20:11:57.528133  319301 logs.go:282] 0 containers: []
	W1227 20:11:57.528142  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:57.528155  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:57.528168  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:57.619388  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:57.619424  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:57.650304  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:57.650332  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:57.699631  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:57.699667  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:57.743221  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:57.743254  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:57.769136  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:57.769164  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:57.786763  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:57.786790  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:57.859691  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:57.849669    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.850063    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.853911    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.854484    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.856001    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:57.849669    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.850063    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.853911    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.854484    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.856001    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:57.859713  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:57.859728  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:57.884558  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:57.884586  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:57.961115  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:57.961152  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:00.497672  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:00.510050  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:00.510129  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:00.544933  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:00.544956  319301 cri.go:96] found id: ""
	I1227 20:12:00.544965  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:00.545025  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:00.549158  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:00.549233  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:00.576607  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:00.576630  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:00.576636  319301 cri.go:96] found id: ""
	I1227 20:12:00.576643  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:00.576700  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:00.580716  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:00.584708  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:00.584783  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:00.623469  319301 cri.go:96] found id: ""
	I1227 20:12:00.623492  319301 logs.go:282] 0 containers: []
	W1227 20:12:00.623501  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:00.623508  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:00.623567  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:00.650388  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:00.650460  319301 cri.go:96] found id: ""
	I1227 20:12:00.650476  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:00.650537  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:00.654531  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:00.654613  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:00.685179  319301 cri.go:96] found id: ""
	I1227 20:12:00.685206  319301 logs.go:282] 0 containers: []
	W1227 20:12:00.685215  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:00.685222  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:00.685283  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:00.716017  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:00.716036  319301 cri.go:96] found id: ""
	I1227 20:12:00.716045  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:00.716102  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:00.720897  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:00.720967  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:00.752084  319301 cri.go:96] found id: ""
	I1227 20:12:00.752108  319301 logs.go:282] 0 containers: []
	W1227 20:12:00.752118  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:00.752133  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:00.752145  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:00.779162  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:00.779191  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:00.828229  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:00.828268  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:00.854975  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:00.855005  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:00.883576  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:00.883606  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:00.965151  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:00.965192  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:01.067209  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:01.067248  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:01.085199  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:01.085232  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:01.155625  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:01.146876    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.148053    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.148721    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.149832    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.150397    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:01.146876    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.148053    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.148721    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.149832    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.150397    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:01.155647  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:01.155660  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:01.206940  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:01.206978  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
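	(Every "describe nodes" attempt above fails with connection refused on localhost:8443, i.e. nothing is accepting connections on the apiserver port even though a kube-apiserver container ID is listed. A hedged manual check from inside the node, assuming curl and ss are available in the node image; the port, binary path and kubeconfig path are taken from the log:
	    sudo ss -ltnp | grep 8443                                        # is anything listening on the apiserver port?
	    curl -sk https://localhost:8443/healthz                          # expect "ok" once the apiserver is serving
	    sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes
	)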
	I1227 20:12:03.749679  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:03.760472  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:03.760548  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:03.788993  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:03.789016  319301 cri.go:96] found id: ""
	I1227 20:12:03.789024  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:03.789079  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:03.792725  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:03.792798  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:03.817942  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:03.817964  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:03.817969  319301 cri.go:96] found id: ""
	I1227 20:12:03.817975  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:03.818031  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:03.821717  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:03.825168  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:03.825254  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:03.851505  319301 cri.go:96] found id: ""
	I1227 20:12:03.851527  319301 logs.go:282] 0 containers: []
	W1227 20:12:03.851536  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:03.851542  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:03.851606  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:03.878946  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:03.878971  319301 cri.go:96] found id: ""
	I1227 20:12:03.878980  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:03.879043  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:03.883057  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:03.883130  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:03.911906  319301 cri.go:96] found id: ""
	I1227 20:12:03.911933  319301 logs.go:282] 0 containers: []
	W1227 20:12:03.911943  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:03.911950  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:03.912009  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:03.942160  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:03.942183  319301 cri.go:96] found id: ""
	I1227 20:12:03.942192  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:03.942252  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:03.946415  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:03.946666  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:03.979149  319301 cri.go:96] found id: ""
	I1227 20:12:03.979174  319301 logs.go:282] 0 containers: []
	W1227 20:12:03.979182  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:03.979198  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:03.979210  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:04.005778  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:04.005811  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:04.088126  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:04.088160  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:04.119438  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:04.119469  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:04.190373  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:04.181899    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.182747    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.184416    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.184965    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.186575    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:04.181899    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.182747    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.184416    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.184965    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.186575    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:04.190394  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:04.190407  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:04.220233  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:04.220259  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:04.245645  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:04.245671  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:04.345961  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:04.345994  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:04.365659  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:04.365694  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:04.417757  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:04.417791  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:06.964717  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:06.979395  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:06.979502  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:07.006920  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:07.006954  319301 cri.go:96] found id: ""
	I1227 20:12:07.006964  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:07.007030  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:07.012095  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:07.012233  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:07.041413  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:07.041494  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:07.041512  319301 cri.go:96] found id: ""
	I1227 20:12:07.041520  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:07.041598  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:07.045354  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:07.049177  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:07.049259  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:07.083301  319301 cri.go:96] found id: ""
	I1227 20:12:07.083329  319301 logs.go:282] 0 containers: []
	W1227 20:12:07.083338  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:07.083344  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:07.083421  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:07.115313  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:07.115338  319301 cri.go:96] found id: ""
	I1227 20:12:07.115347  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:07.115417  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:07.119201  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:07.119288  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:07.146102  319301 cri.go:96] found id: ""
	I1227 20:12:07.146131  319301 logs.go:282] 0 containers: []
	W1227 20:12:07.146140  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:07.146147  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:07.146208  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:07.172141  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:07.172172  319301 cri.go:96] found id: ""
	I1227 20:12:07.172180  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:07.172247  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:07.175941  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:07.176014  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:07.201635  319301 cri.go:96] found id: ""
	I1227 20:12:07.201661  319301 logs.go:282] 0 containers: []
	W1227 20:12:07.201682  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:07.201699  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:07.201711  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:07.267041  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:07.258167    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.258717    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.260273    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.260745    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.262196    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:07.258167    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.258717    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.260273    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.260745    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.262196    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:07.267062  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:07.267076  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:07.299653  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:07.299681  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:07.379741  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:07.379776  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:07.478201  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:07.478238  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:07.496143  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:07.496172  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:07.524943  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:07.524973  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:07.588841  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:07.588883  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:07.639348  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:07.639391  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:07.671575  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:07.671608  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
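	(Besides per-container logs, each round also collects node-level sources: kubelet and CRI-O units via journalctl, the kernel ring buffer via dmesg, and a container-status snapshot that falls back to docker ps when crictl is not on the PATH. The same commands, as quoted above, can be run by hand; the only assumption is keeping the same -n / tail window of 400 lines:
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a   # same fallback as the log, written with $() instead of backticks
	)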
	I1227 20:12:10.217505  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:10.228493  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:10.228562  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:10.262225  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:10.262248  319301 cri.go:96] found id: ""
	I1227 20:12:10.262256  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:10.262312  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:10.267062  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:10.267197  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:10.296434  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:10.296459  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:10.296464  319301 cri.go:96] found id: ""
	I1227 20:12:10.296472  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:10.296529  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:10.300310  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:10.304957  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:10.305022  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:10.330532  319301 cri.go:96] found id: ""
	I1227 20:12:10.330560  319301 logs.go:282] 0 containers: []
	W1227 20:12:10.330570  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:10.330584  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:10.330646  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:10.361300  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:10.361324  319301 cri.go:96] found id: ""
	I1227 20:12:10.361332  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:10.361394  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:10.365025  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:10.365095  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:10.391129  319301 cri.go:96] found id: ""
	I1227 20:12:10.391150  319301 logs.go:282] 0 containers: []
	W1227 20:12:10.391159  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:10.391165  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:10.391228  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:10.427446  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:10.427467  319301 cri.go:96] found id: ""
	I1227 20:12:10.427475  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:10.427530  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:10.431147  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:10.431236  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:10.457621  319301 cri.go:96] found id: ""
	I1227 20:12:10.457645  319301 logs.go:282] 0 containers: []
	W1227 20:12:10.457653  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:10.457669  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:10.457680  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:10.497801  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:10.497832  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:10.533576  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:10.533606  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:10.563063  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:10.563092  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:10.595636  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:10.595663  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:10.707654  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:10.707734  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:10.727626  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:10.727752  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:10.859705  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:10.846588    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.847467    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.853805    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.854122    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.855621    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:10.846588    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.847467    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.853805    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.854122    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.855621    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:10.859774  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:10.859801  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:10.958101  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:10.958183  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:11.020263  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:11.020358  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
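	(The rounds above are the same probe repeated roughly every 2.5 seconds: pgrep for kube-apiserver, rediscover containers, regather logs, and fail again at the describe-nodes step. In the round that follows, a second kube-controller-manager container (65ed2f7a44a9...) appears next to 3ebe64b33414..., so the control plane is being restarted while the apiserver still refuses connections on 8443. A minimal sketch of watching for that transition by hand; the loop, sleep interval and omission of --quiet (to see container state) are assumptions:
	    while true; do
	      sudo crictl --timeout=10s ps -a --name=kube-controller-manager
	      sudo crictl --timeout=10s ps -a --name=kube-apiserver
	      sleep 3
	    done
	)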
	I1227 20:12:13.639948  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:13.650732  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:13.650797  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:13.676632  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:13.676651  319301 cri.go:96] found id: ""
	I1227 20:12:13.676658  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:13.676710  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:13.680432  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:13.680542  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:13.711606  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:13.711625  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:13.711630  319301 cri.go:96] found id: ""
	I1227 20:12:13.711637  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:13.711691  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:13.715265  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:13.718775  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:13.718931  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:13.746245  319301 cri.go:96] found id: ""
	I1227 20:12:13.746275  319301 logs.go:282] 0 containers: []
	W1227 20:12:13.746291  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:13.746298  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:13.746374  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:13.779388  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:13.779409  319301 cri.go:96] found id: ""
	I1227 20:12:13.779418  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:13.779504  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:13.783612  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:13.783685  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:13.808842  319301 cri.go:96] found id: ""
	I1227 20:12:13.808863  319301 logs.go:282] 0 containers: []
	W1227 20:12:13.808872  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:13.808878  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:13.808934  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:13.835153  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:13.835174  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:13.835179  319301 cri.go:96] found id: ""
	I1227 20:12:13.835187  319301 logs.go:282] 2 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:13.835249  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:13.839009  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:13.842805  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:13.842881  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:13.872544  319301 cri.go:96] found id: ""
	I1227 20:12:13.872570  319301 logs.go:282] 0 containers: []
	W1227 20:12:13.872579  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:13.872587  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:13.872599  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:13.898550  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:13.898578  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:13.924170  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:13.924197  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:14.003535  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:14.003571  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:14.105189  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:14.105228  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:14.176586  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:14.168398    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.169127    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.170691    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.171292    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.172935    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:14.168398    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.169127    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.170691    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.171292    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.172935    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:14.176608  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:14.176622  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:14.204979  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:14.205007  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:14.246862  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:14.246911  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:14.282199  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:14.282225  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:14.315428  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:14.315459  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:14.334814  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:14.334848  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:16.885569  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:16.896097  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:16.896162  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:16.925765  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:16.925785  319301 cri.go:96] found id: ""
	I1227 20:12:16.925794  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:16.925849  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:16.929283  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:16.929349  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:16.954491  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:16.954515  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:16.954520  319301 cri.go:96] found id: ""
	I1227 20:12:16.954528  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:16.954586  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:16.958221  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:16.961382  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:16.961573  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:16.994836  319301 cri.go:96] found id: ""
	I1227 20:12:16.994860  319301 logs.go:282] 0 containers: []
	W1227 20:12:16.994868  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:16.994874  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:16.994933  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:17.021903  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:17.021926  319301 cri.go:96] found id: ""
	I1227 20:12:17.021934  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:17.022017  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:17.025998  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:17.026093  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:17.052024  319301 cri.go:96] found id: ""
	I1227 20:12:17.052049  319301 logs.go:282] 0 containers: []
	W1227 20:12:17.052058  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:17.052083  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:17.052163  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:17.078719  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:17.078740  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:17.078744  319301 cri.go:96] found id: ""
	I1227 20:12:17.078752  319301 logs.go:282] 2 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:17.078826  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:17.082470  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:17.086147  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:17.086220  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:17.116980  319301 cri.go:96] found id: ""
	I1227 20:12:17.117003  319301 logs.go:282] 0 containers: []
	W1227 20:12:17.117013  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:17.117022  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:17.117033  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:17.196379  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:17.196418  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:17.230926  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:17.230959  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:17.250661  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:17.250691  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:17.322817  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:17.314780    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.315442    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.317018    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.317535    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.319106    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:17.314780    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.315442    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.317018    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.317535    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.319106    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:17.322840  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:17.322856  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:17.351684  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:17.351711  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:17.399098  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:17.399132  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:17.490988  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:17.491023  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:17.556151  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:17.556187  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:17.582835  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:17.582871  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:17.613801  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:17.613837  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:20.145063  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:20.156515  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:20.156583  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:20.187608  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:20.187635  319301 cri.go:96] found id: ""
	I1227 20:12:20.187645  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:20.187707  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:20.192025  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:20.192105  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:20.224749  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:20.224774  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:20.224780  319301 cri.go:96] found id: ""
	I1227 20:12:20.224788  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:20.224847  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:20.229081  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:20.233080  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:20.233183  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:20.265194  319301 cri.go:96] found id: ""
	I1227 20:12:20.265217  319301 logs.go:282] 0 containers: []
	W1227 20:12:20.265226  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:20.265233  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:20.265290  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:20.294941  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:20.294965  319301 cri.go:96] found id: ""
	I1227 20:12:20.294974  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:20.295030  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:20.299194  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:20.299295  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:20.327103  319301 cri.go:96] found id: ""
	I1227 20:12:20.327127  319301 logs.go:282] 0 containers: []
	W1227 20:12:20.327136  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:20.327142  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:20.327225  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:20.355319  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:20.355340  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:20.355351  319301 cri.go:96] found id: ""
	I1227 20:12:20.355359  319301 logs.go:282] 2 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:20.355441  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:20.359302  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:20.362848  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:20.362949  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:20.393433  319301 cri.go:96] found id: ""
	I1227 20:12:20.393488  319301 logs.go:282] 0 containers: []
	W1227 20:12:20.393498  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:20.393527  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:20.393545  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:20.421493  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:20.421522  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:20.498925  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:20.498966  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:20.519854  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:20.519883  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:20.576881  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:20.576922  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:20.621620  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:20.621656  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:20.649613  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:20.649648  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:20.685860  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:20.685889  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:20.779036  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:20.779072  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:20.846477  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:20.838325    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.838829    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.840489    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.841069    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.842962    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:20.838325    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.838829    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.840489    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.841069    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.842962    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:20.846497  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:20.846511  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:20.876493  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:20.876523  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:23.407116  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:23.417842  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:23.417914  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:23.449077  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:23.449100  319301 cri.go:96] found id: ""
	I1227 20:12:23.449108  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:23.449162  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:23.452848  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:23.452918  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:23.481566  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:23.481589  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:23.481595  319301 cri.go:96] found id: ""
	I1227 20:12:23.481602  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:23.481661  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:23.485561  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:23.489363  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:23.489433  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:23.515690  319301 cri.go:96] found id: ""
	I1227 20:12:23.515717  319301 logs.go:282] 0 containers: []
	W1227 20:12:23.515727  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:23.515734  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:23.515796  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:23.542113  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:23.542134  319301 cri.go:96] found id: ""
	I1227 20:12:23.542144  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:23.542198  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:23.546461  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:23.546535  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:23.572051  319301 cri.go:96] found id: ""
	I1227 20:12:23.572080  319301 logs.go:282] 0 containers: []
	W1227 20:12:23.572090  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:23.572096  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:23.572154  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:23.598223  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:23.598246  319301 cri.go:96] found id: ""
	I1227 20:12:23.598254  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:23.598308  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:23.602471  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:23.602548  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:23.632139  319301 cri.go:96] found id: ""
	I1227 20:12:23.632162  319301 logs.go:282] 0 containers: []
	W1227 20:12:23.632171  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:23.632185  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:23.632198  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:23.728534  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:23.728573  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:23.746910  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:23.746937  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:23.790408  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:23.790450  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:23.816648  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:23.816683  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:23.844206  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:23.844234  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:23.922341  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:23.922381  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:23.990219  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:23.981959    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.982768    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.984359    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.984673    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.986151    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:23.981959    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.982768    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.984359    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.984673    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.986151    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:23.990238  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:23.990252  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:24.021769  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:24.021804  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:24.077552  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:24.077591  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:26.612708  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:26.623326  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:26.623428  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:26.653266  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:26.653289  319301 cri.go:96] found id: ""
	I1227 20:12:26.653298  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:26.653373  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:26.657260  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:26.657353  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:26.683071  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:26.683092  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:26.683098  319301 cri.go:96] found id: ""
	I1227 20:12:26.683105  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:26.683166  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:26.686901  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:26.690560  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:26.690649  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:26.718862  319301 cri.go:96] found id: ""
	I1227 20:12:26.718885  319301 logs.go:282] 0 containers: []
	W1227 20:12:26.718894  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:26.718900  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:26.718959  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:26.747552  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:26.747574  319301 cri.go:96] found id: ""
	I1227 20:12:26.747582  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:26.747637  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:26.751375  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:26.751452  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:26.777853  319301 cri.go:96] found id: ""
	I1227 20:12:26.777880  319301 logs.go:282] 0 containers: []
	W1227 20:12:26.777889  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:26.777895  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:26.777957  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:26.804445  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:26.804468  319301 cri.go:96] found id: ""
	I1227 20:12:26.804477  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:26.804535  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:26.808568  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:26.808691  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:26.836896  319301 cri.go:96] found id: ""
	I1227 20:12:26.836922  319301 logs.go:282] 0 containers: []
	W1227 20:12:26.836932  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:26.836945  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:26.836960  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:26.857005  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:26.857033  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:26.928707  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:26.920823    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.921472    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.923023    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.923492    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.925222    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:26.920823    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.921472    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.923023    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.923492    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.925222    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:26.928729  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:26.928742  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:26.956493  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:26.956522  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:26.986280  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:26.986306  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:27.076259  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:27.076295  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:27.172547  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:27.172582  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:27.230338  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:27.230374  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:27.276521  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:27.276554  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:27.308603  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:27.308630  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:29.841840  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:29.852151  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:29.852219  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:29.879885  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:29.879922  319301 cri.go:96] found id: ""
	I1227 20:12:29.879931  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:29.880028  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:29.883662  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:29.883731  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:29.912705  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:29.912727  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:29.912733  319301 cri.go:96] found id: ""
	I1227 20:12:29.912740  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:29.912795  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:29.916252  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:29.921161  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:29.921231  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:29.950824  319301 cri.go:96] found id: ""
	I1227 20:12:29.950846  319301 logs.go:282] 0 containers: []
	W1227 20:12:29.950855  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:29.950862  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:29.950917  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:29.986337  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:29.986357  319301 cri.go:96] found id: ""
	I1227 20:12:29.986365  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:29.986420  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:29.990557  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:29.990644  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:30.034984  319301 cri.go:96] found id: ""
	I1227 20:12:30.035016  319301 logs.go:282] 0 containers: []
	W1227 20:12:30.035027  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:30.035034  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:30.035109  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:30.071248  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:30.071274  319301 cri.go:96] found id: ""
	I1227 20:12:30.071284  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:30.071380  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:30.075947  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:30.076061  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:30.105680  319301 cri.go:96] found id: ""
	I1227 20:12:30.105705  319301 logs.go:282] 0 containers: []
	W1227 20:12:30.105715  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:30.105730  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:30.105748  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:30.135961  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:30.135994  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:30.216289  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:30.216331  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:30.255913  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:30.255946  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:30.355835  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:30.355870  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:30.429441  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:30.421794    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.422353    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.423860    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.424337    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.426060    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:30.421794    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.422353    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.423860    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.424337    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.426060    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:30.429483  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:30.429495  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:30.458949  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:30.458978  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:30.502640  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:30.502677  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:30.532992  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:30.533023  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:30.557835  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:30.557866  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:33.116429  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:33.127018  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:33.127132  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:33.153291  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:33.153316  319301 cri.go:96] found id: ""
	I1227 20:12:33.153324  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:33.153379  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:33.157166  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:33.157239  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:33.183179  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:33.183200  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:33.183205  319301 cri.go:96] found id: ""
	I1227 20:12:33.183213  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:33.183265  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:33.186752  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:33.190422  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:33.190494  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:33.220717  319301 cri.go:96] found id: ""
	I1227 20:12:33.220739  319301 logs.go:282] 0 containers: []
	W1227 20:12:33.220748  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:33.220754  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:33.220818  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:33.251060  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:33.251083  319301 cri.go:96] found id: ""
	I1227 20:12:33.251091  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:33.251145  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:33.254679  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:33.254748  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:33.286493  319301 cri.go:96] found id: ""
	I1227 20:12:33.286518  319301 logs.go:282] 0 containers: []
	W1227 20:12:33.286527  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:33.286533  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:33.286620  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:33.313587  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:33.313613  319301 cri.go:96] found id: ""
	I1227 20:12:33.313622  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:33.313680  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:33.317328  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:33.317408  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:33.343846  319301 cri.go:96] found id: ""
	I1227 20:12:33.343871  319301 logs.go:282] 0 containers: []
	W1227 20:12:33.343880  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:33.343893  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:33.343925  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:33.438565  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:33.438603  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:33.457675  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:33.457705  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:33.525788  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:33.517888    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.518628    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.520164    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.520718    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.522282    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:33.517888    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.518628    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.520164    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.520718    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.522282    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:33.525811  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:33.525825  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:33.552529  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:33.552556  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:33.580140  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:33.580172  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:33.641393  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:33.641499  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:33.693161  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:33.693199  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:33.724867  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:33.724893  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:33.805497  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:33.805537  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:36.337435  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:36.352136  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:36.352206  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:36.378464  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:36.378486  319301 cri.go:96] found id: ""
	I1227 20:12:36.378494  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:36.378548  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:36.382431  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:36.382500  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:36.408340  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:36.408362  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:36.408367  319301 cri.go:96] found id: ""
	I1227 20:12:36.408375  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:36.408430  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:36.411977  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:36.415450  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:36.415561  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:36.441750  319301 cri.go:96] found id: ""
	I1227 20:12:36.441773  319301 logs.go:282] 0 containers: []
	W1227 20:12:36.441781  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:36.441789  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:36.441849  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:36.469111  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:36.469133  319301 cri.go:96] found id: ""
	I1227 20:12:36.469141  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:36.469193  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:36.472982  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:36.473055  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:36.501345  319301 cri.go:96] found id: ""
	I1227 20:12:36.501368  319301 logs.go:282] 0 containers: []
	W1227 20:12:36.501378  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:36.501384  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:36.501477  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:36.527577  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:36.527600  319301 cri.go:96] found id: ""
	I1227 20:12:36.527608  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:36.527664  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:36.531477  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:36.531552  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:36.561054  319301 cri.go:96] found id: ""
	I1227 20:12:36.561130  319301 logs.go:282] 0 containers: []
	W1227 20:12:36.561154  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:36.561181  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:36.561217  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:36.589983  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:36.590014  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:36.669955  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:36.669994  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:36.768958  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:36.768994  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:36.787310  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:36.787336  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:36.856793  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:36.848163    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.849099    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.850911    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.851491    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.853132    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:36.848163    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.849099    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.850911    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.851491    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.853132    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:36.856819  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:36.856834  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:36.909328  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:36.909366  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:36.960708  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:36.960741  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:36.988799  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:36.988826  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:37.020389  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:37.020426  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:39.556036  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:39.567454  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:39.567523  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:39.597767  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:39.597789  319301 cri.go:96] found id: ""
	I1227 20:12:39.597797  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:39.597853  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:39.601347  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:39.601417  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:39.630309  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:39.630330  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:39.630335  319301 cri.go:96] found id: ""
	I1227 20:12:39.630343  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:39.630395  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:39.634109  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:39.637369  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:39.637474  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:39.664492  319301 cri.go:96] found id: ""
	I1227 20:12:39.664515  319301 logs.go:282] 0 containers: []
	W1227 20:12:39.664523  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:39.664536  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:39.664595  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:39.689554  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:39.689585  319301 cri.go:96] found id: ""
	I1227 20:12:39.689594  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:39.689648  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:39.693184  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:39.693251  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:39.719030  319301 cri.go:96] found id: ""
	I1227 20:12:39.719057  319301 logs.go:282] 0 containers: []
	W1227 20:12:39.719066  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:39.719073  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:39.719131  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:39.751945  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:39.751967  319301 cri.go:96] found id: ""
	I1227 20:12:39.751976  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:39.752058  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:39.755910  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:39.755984  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:39.787281  319301 cri.go:96] found id: ""
	I1227 20:12:39.787306  319301 logs.go:282] 0 containers: []
	W1227 20:12:39.787315  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:39.787329  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:39.787341  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:39.818112  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:39.818181  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:39.877195  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:39.877228  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:39.902875  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:39.902908  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:39.933383  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:39.933411  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:39.964696  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:39.964725  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:40.094427  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:40.094546  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:40.115127  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:40.115169  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:40.188369  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:40.178140    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.178935    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.180929    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.181956    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.182727    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:40.178140    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.178935    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.180929    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.181956    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.182727    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:40.188403  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:40.188417  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:40.248250  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:40.248293  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:42.832956  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:42.843630  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:42.843716  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:42.880632  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:42.880654  319301 cri.go:96] found id: ""
	I1227 20:12:42.880662  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:42.880716  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:42.884197  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:42.884283  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:42.912329  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:42.912351  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:42.912356  319301 cri.go:96] found id: ""
	I1227 20:12:42.912363  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:42.912420  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:42.919733  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:42.924460  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:42.924555  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:42.950089  319301 cri.go:96] found id: ""
	I1227 20:12:42.950112  319301 logs.go:282] 0 containers: []
	W1227 20:12:42.950120  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:42.950126  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:42.950186  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:42.982372  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:42.982393  319301 cri.go:96] found id: ""
	I1227 20:12:42.982400  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:42.982454  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:42.985981  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:42.986048  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:43.025247  319301 cri.go:96] found id: ""
	I1227 20:12:43.025270  319301 logs.go:282] 0 containers: []
	W1227 20:12:43.025279  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:43.025285  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:43.025345  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:43.051039  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:43.051058  319301 cri.go:96] found id: ""
	I1227 20:12:43.051066  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:43.051128  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:43.055686  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:43.055774  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:43.080239  319301 cri.go:96] found id: ""
	I1227 20:12:43.080305  319301 logs.go:282] 0 containers: []
	W1227 20:12:43.080328  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:43.080365  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:43.080392  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:43.117618  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:43.117647  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:43.203203  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:43.203243  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:43.233482  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:43.233514  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:43.331030  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:43.331068  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:43.400596  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:43.391562    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.392218    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.393995    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.395389    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.396936    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:43.391562    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.392218    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.393995    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.395389    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.396936    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:43.400620  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:43.400635  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:43.451280  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:43.451316  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:43.469068  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:43.469097  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:43.497581  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:43.497607  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:43.541271  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:43.541307  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:46.066721  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:46.077342  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:46.077418  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:46.106073  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:46.106096  319301 cri.go:96] found id: ""
	I1227 20:12:46.106105  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:46.106161  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:46.110573  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:46.110647  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:46.141403  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:46.141426  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:46.141431  319301 cri.go:96] found id: ""
	I1227 20:12:46.141438  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:46.141524  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:46.146711  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:46.150119  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:46.150207  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:46.177378  319301 cri.go:96] found id: ""
	I1227 20:12:46.177403  319301 logs.go:282] 0 containers: []
	W1227 20:12:46.177411  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:46.177418  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:46.177523  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:46.203465  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:46.203488  319301 cri.go:96] found id: ""
	I1227 20:12:46.203497  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:46.203554  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:46.207163  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:46.207260  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:46.232721  319301 cri.go:96] found id: ""
	I1227 20:12:46.232748  319301 logs.go:282] 0 containers: []
	W1227 20:12:46.232757  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:46.232764  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:46.232849  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:46.260899  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:46.260924  319301 cri.go:96] found id: ""
	I1227 20:12:46.260933  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:46.261004  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:46.264880  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:46.264994  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:46.294702  319301 cri.go:96] found id: ""
	I1227 20:12:46.294772  319301 logs.go:282] 0 containers: []
	W1227 20:12:46.294788  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:46.294802  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:46.294815  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:46.392870  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:46.392907  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:46.411136  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:46.411165  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:46.442076  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:46.442105  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:46.507864  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:46.500419    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.500963    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.502621    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.503081    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.504499    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:46.500419    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.500963    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.502621    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.503081    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.504499    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:46.507887  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:46.507900  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:46.534504  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:46.534534  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:46.599046  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:46.599082  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:46.644197  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:46.644234  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:46.674716  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:46.674743  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:46.703463  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:46.703492  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:49.285570  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:49.295868  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:49.295960  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:49.323445  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:49.323469  319301 cri.go:96] found id: ""
	I1227 20:12:49.323477  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:49.323567  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:49.327039  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:49.327106  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:49.353757  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:49.353781  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:49.353787  319301 cri.go:96] found id: ""
	I1227 20:12:49.353794  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:49.353854  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:49.360531  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:49.364480  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:49.364568  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:49.392254  319301 cri.go:96] found id: ""
	I1227 20:12:49.392325  319301 logs.go:282] 0 containers: []
	W1227 20:12:49.392349  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:49.392374  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:49.392458  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:49.422197  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:49.422218  319301 cri.go:96] found id: ""
	I1227 20:12:49.422226  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:49.422279  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:49.425742  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:49.425813  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:49.451624  319301 cri.go:96] found id: ""
	I1227 20:12:49.451650  319301 logs.go:282] 0 containers: []
	W1227 20:12:49.451659  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:49.451665  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:49.451725  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:49.477813  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:49.477836  319301 cri.go:96] found id: ""
	I1227 20:12:49.477846  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:49.477911  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:49.481531  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:49.481625  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:49.507374  319301 cri.go:96] found id: ""
	I1227 20:12:49.507400  319301 logs.go:282] 0 containers: []
	W1227 20:12:49.507409  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:49.507425  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:49.507438  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:49.598294  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:49.598336  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:49.636279  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:49.636307  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:49.707651  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:49.707686  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:49.765937  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:49.765972  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:49.783282  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:49.783310  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:49.868264  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:49.856321    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.857001    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.858772    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.863251    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.863608    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:49.856321    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.857001    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.858772    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.863251    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.863608    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:49.868294  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:49.868307  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:49.894496  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:49.894524  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:49.919827  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:49.919864  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:50.000367  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:50.000443  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:52.556360  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:52.566511  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:52.566580  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:52.593484  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:52.593517  319301 cri.go:96] found id: ""
	I1227 20:12:52.593527  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:52.593640  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:52.597279  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:52.597349  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:52.623469  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:52.623547  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:52.623568  319301 cri.go:96] found id: ""
	I1227 20:12:52.623591  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:52.623659  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:52.627305  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:52.630834  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:52.630949  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:52.657093  319301 cri.go:96] found id: ""
	I1227 20:12:52.657120  319301 logs.go:282] 0 containers: []
	W1227 20:12:52.657130  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:52.657136  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:52.657201  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:52.683396  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:52.683470  319301 cri.go:96] found id: ""
	I1227 20:12:52.683487  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:52.683556  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:52.687311  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:52.687381  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:52.716233  319301 cri.go:96] found id: ""
	I1227 20:12:52.716257  319301 logs.go:282] 0 containers: []
	W1227 20:12:52.716266  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:52.716273  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:52.716333  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:52.742458  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:52.742482  319301 cri.go:96] found id: ""
	I1227 20:12:52.742491  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:52.742547  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:52.746498  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:52.746629  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:52.771746  319301 cri.go:96] found id: ""
	I1227 20:12:52.771772  319301 logs.go:282] 0 containers: []
	W1227 20:12:52.771781  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:52.771820  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:52.771837  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:52.824894  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:52.824929  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:52.854289  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:52.854318  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:52.889855  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:52.889887  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:52.993260  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:52.993294  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:53.038574  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:53.038617  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:53.071005  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:53.071035  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:53.149881  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:53.149919  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:53.167391  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:53.167547  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:53.240789  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:53.230138    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.230860    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.232667    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.233277    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.236557    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:53.230138    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.230860    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.232667    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.233277    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.236557    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:53.240810  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:53.240823  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:55.779743  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:55.790606  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:55.790677  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:55.817091  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:55.817112  319301 cri.go:96] found id: ""
	I1227 20:12:55.817121  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:55.817176  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:55.820799  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:55.820876  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:55.850874  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:55.850897  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:55.850903  319301 cri.go:96] found id: ""
	I1227 20:12:55.850911  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:55.850964  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:55.854708  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:55.858278  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:55.858347  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:55.887432  319301 cri.go:96] found id: ""
	I1227 20:12:55.887456  319301 logs.go:282] 0 containers: []
	W1227 20:12:55.887465  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:55.887471  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:55.887526  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:55.914817  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:55.914839  319301 cri.go:96] found id: ""
	I1227 20:12:55.914847  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:55.914903  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:55.918494  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:55.918571  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:55.948625  319301 cri.go:96] found id: ""
	I1227 20:12:55.948648  319301 logs.go:282] 0 containers: []
	W1227 20:12:55.948657  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:55.948664  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:55.948733  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:55.984844  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:55.984867  319301 cri.go:96] found id: ""
	I1227 20:12:55.984875  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:55.984930  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:55.988564  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:55.988652  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:56.016926  319301 cri.go:96] found id: ""
	I1227 20:12:56.016956  319301 logs.go:282] 0 containers: []
	W1227 20:12:56.016966  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:56.016982  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:56.016994  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:56.118289  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:56.118325  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:56.136502  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:56.136532  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:56.169081  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:56.169108  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:56.211041  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:56.211076  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:56.243209  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:56.243244  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:56.314060  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:56.305651    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.306321    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.307810    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.308362    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.310021    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:56.305651    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.306321    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.307810    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.308362    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.310021    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:56.314082  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:56.314098  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:56.377302  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:56.377341  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:56.410912  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:56.410991  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:56.438190  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:56.438218  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:59.018860  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:59.029806  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:59.029879  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:59.058607  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:59.058631  319301 cri.go:96] found id: ""
	I1227 20:12:59.058640  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:59.058697  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:59.062467  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:59.062544  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:59.091353  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:59.091376  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:59.091382  319301 cri.go:96] found id: ""
	I1227 20:12:59.091389  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:59.091445  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:59.095198  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:59.100058  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:59.100137  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:59.126292  319301 cri.go:96] found id: ""
	I1227 20:12:59.126317  319301 logs.go:282] 0 containers: []
	W1227 20:12:59.126326  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:59.126333  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:59.126397  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:59.155155  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:59.155177  319301 cri.go:96] found id: ""
	I1227 20:12:59.155186  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:59.155242  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:59.158920  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:59.158992  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:59.189092  319301 cri.go:96] found id: ""
	I1227 20:12:59.189159  319301 logs.go:282] 0 containers: []
	W1227 20:12:59.189181  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:59.189206  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:59.189294  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:59.216198  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:59.216262  319301 cri.go:96] found id: ""
	I1227 20:12:59.216285  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:59.216377  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:59.224385  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:59.224486  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:59.252259  319301 cri.go:96] found id: ""
	I1227 20:12:59.252285  319301 logs.go:282] 0 containers: []
	W1227 20:12:59.252294  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:59.252309  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:59.252342  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:59.273005  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:59.273034  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:59.301850  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:59.301881  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:59.356187  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:59.356221  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:59.399819  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:59.399852  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:59.433910  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:59.433941  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:59.513398  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:59.513432  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:59.549380  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:59.549409  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:59.623298  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:59.615506    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.615904    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.617387    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.618024    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.619495    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:59.615506    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.615904    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.617387    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.618024    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.619495    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:59.623322  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:59.623336  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:59.649178  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:59.649207  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:02.243275  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:02.254105  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:02.254177  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:02.286583  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:02.286605  319301 cri.go:96] found id: ""
	I1227 20:13:02.286613  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:02.286669  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:02.290640  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:02.290708  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:02.317723  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:02.317746  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:02.317752  319301 cri.go:96] found id: ""
	I1227 20:13:02.317760  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:02.317817  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:02.322227  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:02.325742  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:02.325814  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:02.352306  319301 cri.go:96] found id: ""
	I1227 20:13:02.352333  319301 logs.go:282] 0 containers: []
	W1227 20:13:02.352342  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:02.352349  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:02.352409  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:02.378873  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:02.378896  319301 cri.go:96] found id: ""
	I1227 20:13:02.378906  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:02.378961  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:02.383556  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:02.383681  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:02.421495  319301 cri.go:96] found id: ""
	I1227 20:13:02.421526  319301 logs.go:282] 0 containers: []
	W1227 20:13:02.421550  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:02.421579  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:02.421661  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:02.454963  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:02.454985  319301 cri.go:96] found id: ""
	I1227 20:13:02.454994  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:02.455071  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:02.458781  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:02.458901  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:02.488822  319301 cri.go:96] found id: ""
	I1227 20:13:02.488848  319301 logs.go:282] 0 containers: []
	W1227 20:13:02.488857  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:02.488872  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:02.488904  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:02.513914  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:02.513945  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:02.543786  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:02.543815  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:02.602843  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:02.602877  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:02.634221  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:02.634257  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:02.736305  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:02.736347  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:02.812827  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:02.803912    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.804866    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.806654    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.807254    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.808858    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:02.803912    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.804866    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.806654    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.807254    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.808858    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:02.812848  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:02.812861  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:02.870730  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:02.870770  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:02.896826  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:02.896857  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:02.928575  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:02.928604  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:05.512539  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:05.522703  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:05.522777  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:05.549167  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:05.549187  319301 cri.go:96] found id: ""
	I1227 20:13:05.549195  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:05.549252  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:05.553114  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:05.553224  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:05.591305  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:05.591329  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:05.591334  319301 cri.go:96] found id: ""
	I1227 20:13:05.591342  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:05.591399  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:05.595292  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:05.598966  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:05.599090  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:05.626541  319301 cri.go:96] found id: ""
	I1227 20:13:05.626567  319301 logs.go:282] 0 containers: []
	W1227 20:13:05.626576  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:05.626583  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:05.626644  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:05.658675  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:05.658707  319301 cri.go:96] found id: ""
	I1227 20:13:05.658715  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:05.658771  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:05.662500  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:05.662571  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:05.694208  319301 cri.go:96] found id: ""
	I1227 20:13:05.694232  319301 logs.go:282] 0 containers: []
	W1227 20:13:05.694241  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:05.694248  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:05.694310  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:05.721109  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:05.721133  319301 cri.go:96] found id: ""
	I1227 20:13:05.721152  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:05.721212  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:05.724940  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:05.725010  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:05.751566  319301 cri.go:96] found id: ""
	I1227 20:13:05.751594  319301 logs.go:282] 0 containers: []
	W1227 20:13:05.751604  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:05.751643  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:05.751660  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:05.849663  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:05.849750  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:05.868576  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:05.868607  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:05.934428  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:05.925753    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.926400    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.928037    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.928648    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.930245    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:05.925753    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.926400    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.928037    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.928648    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.930245    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:05.934452  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:05.934466  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:05.965352  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:05.965378  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:06.020452  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:06.020494  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:06.054720  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:06.054750  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:06.084316  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:06.084346  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:06.166870  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:06.166934  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:06.221058  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:06.221095  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:08.753099  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:08.764525  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:08.764592  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:08.790692  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:08.790714  319301 cri.go:96] found id: ""
	I1227 20:13:08.790725  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:08.790781  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:08.794565  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:08.794679  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:08.820711  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:08.820730  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:08.820734  319301 cri.go:96] found id: ""
	I1227 20:13:08.820741  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:08.820797  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:08.824460  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:08.827902  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:08.827991  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:08.869147  319301 cri.go:96] found id: ""
	I1227 20:13:08.869171  319301 logs.go:282] 0 containers: []
	W1227 20:13:08.869184  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:08.869190  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:08.869273  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:08.897503  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:08.897528  319301 cri.go:96] found id: ""
	I1227 20:13:08.897545  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:08.897605  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:08.902138  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:08.902257  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:08.931144  319301 cri.go:96] found id: ""
	I1227 20:13:08.931168  319301 logs.go:282] 0 containers: []
	W1227 20:13:08.931177  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:08.931183  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:08.931240  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:08.958779  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:08.958802  319301 cri.go:96] found id: ""
	I1227 20:13:08.958810  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:08.958892  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:08.962888  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:08.962966  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:08.991222  319301 cri.go:96] found id: ""
	I1227 20:13:08.991248  319301 logs.go:282] 0 containers: []
	W1227 20:13:08.991257  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:08.991270  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:08.991310  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:09.009225  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:09.009256  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:09.081569  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:09.073722    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.074157    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.075724    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.076257    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.078038    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:09.073722    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.074157    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.075724    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.076257    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.078038    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:09.081592  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:09.081608  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:09.112754  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:09.112780  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:09.163779  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:09.163815  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:09.189441  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:09.189512  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:09.271488  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:09.271569  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:09.314936  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:09.314962  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:09.413305  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:09.413344  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:09.465609  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:09.465639  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:12.002552  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:12.014182  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:12.014264  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:12.052377  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:12.052400  319301 cri.go:96] found id: ""
	I1227 20:13:12.052409  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:12.052466  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:12.056292  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:12.056394  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:12.085743  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:12.085765  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:12.085770  319301 cri.go:96] found id: ""
	I1227 20:13:12.085778  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:12.085835  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:12.089812  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:12.093801  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:12.093896  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:12.122289  319301 cri.go:96] found id: ""
	I1227 20:13:12.122359  319301 logs.go:282] 0 containers: []
	W1227 20:13:12.122386  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:12.122402  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:12.122476  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:12.149731  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:12.149758  319301 cri.go:96] found id: ""
	I1227 20:13:12.149767  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:12.149823  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:12.153602  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:12.153688  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:12.178711  319301 cri.go:96] found id: ""
	I1227 20:13:12.178786  319301 logs.go:282] 0 containers: []
	W1227 20:13:12.178808  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:12.178832  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:12.178917  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:12.205322  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:12.205350  319301 cri.go:96] found id: ""
	I1227 20:13:12.205360  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:12.205414  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:12.209024  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:12.209091  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:12.234488  319301 cri.go:96] found id: ""
	I1227 20:13:12.234557  319301 logs.go:282] 0 containers: []
	W1227 20:13:12.234582  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:12.234609  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:12.234640  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:12.261610  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:12.261639  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:12.315635  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:12.315673  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:12.376280  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:12.376313  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:12.402133  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:12.402165  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:12.430982  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:12.431051  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:12.512045  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:12.512078  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:12.530685  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:12.530716  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:12.568375  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:12.568405  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:12.668785  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:12.668822  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:12.735523  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:12.727415    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.728180    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.729943    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.730267    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.732211    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:12.727415    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.728180    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.729943    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.730267    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.732211    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:15.236014  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:15.247391  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:15.247466  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:15.277268  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:15.277342  319301 cri.go:96] found id: ""
	I1227 20:13:15.277365  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:15.277488  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:15.282305  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:15.282373  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:15.312415  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:15.312436  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:15.312441  319301 cri.go:96] found id: ""
	I1227 20:13:15.312449  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:15.312503  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:15.316541  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:15.319901  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:15.319970  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:15.346399  319301 cri.go:96] found id: ""
	I1227 20:13:15.346424  319301 logs.go:282] 0 containers: []
	W1227 20:13:15.346432  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:15.346439  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:15.346496  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:15.373083  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:15.373104  319301 cri.go:96] found id: ""
	I1227 20:13:15.373112  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:15.373165  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:15.376806  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:15.376918  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:15.401683  319301 cri.go:96] found id: ""
	I1227 20:13:15.401708  319301 logs.go:282] 0 containers: []
	W1227 20:13:15.401717  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:15.401725  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:15.401784  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:15.425772  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:15.425796  319301 cri.go:96] found id: ""
	I1227 20:13:15.425804  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:15.425865  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:15.429359  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:15.429426  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:15.457327  319301 cri.go:96] found id: ""
	I1227 20:13:15.457352  319301 logs.go:282] 0 containers: []
	W1227 20:13:15.457361  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:15.457374  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:15.457387  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:15.499826  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:15.499863  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:15.530003  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:15.530040  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:15.557784  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:15.557811  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:15.637950  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:15.637987  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:15.706856  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:15.696364    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.696954    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.699252    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.700375    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.701334    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:15.696364    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.696954    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.699252    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.700375    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.701334    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:15.706878  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:15.706893  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:15.742198  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:15.742227  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:15.838586  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:15.838624  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:15.857986  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:15.858016  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:15.889281  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:15.889313  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:18.468232  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:18.478612  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:18.478682  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:18.506032  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:18.506056  319301 cri.go:96] found id: ""
	I1227 20:13:18.506064  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:18.506116  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:18.509751  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:18.509832  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:18.537503  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:18.537527  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:18.537533  319301 cri.go:96] found id: ""
	I1227 20:13:18.537541  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:18.537645  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:18.543736  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:18.548696  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:18.548770  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:18.574950  319301 cri.go:96] found id: ""
	I1227 20:13:18.574986  319301 logs.go:282] 0 containers: []
	W1227 20:13:18.574996  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:18.575003  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:18.575063  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:18.603311  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:18.603330  319301 cri.go:96] found id: ""
	I1227 20:13:18.603337  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:18.603391  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:18.607317  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:18.607399  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:18.637190  319301 cri.go:96] found id: ""
	I1227 20:13:18.637214  319301 logs.go:282] 0 containers: []
	W1227 20:13:18.637223  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:18.637230  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:18.637290  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:18.664240  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:18.664260  319301 cri.go:96] found id: ""
	I1227 20:13:18.664268  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:18.664323  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:18.667779  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:18.667845  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:18.694174  319301 cri.go:96] found id: ""
	I1227 20:13:18.694198  319301 logs.go:282] 0 containers: []
	W1227 20:13:18.694208  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:18.694222  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:18.694235  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:18.718997  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:18.719027  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:18.745989  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:18.746067  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:18.822381  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:18.822419  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:18.867357  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:18.867387  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:18.970030  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:18.970069  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:18.991124  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:18.991208  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:19.073512  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:19.064985    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.065841    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.067396    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.067963    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.069601    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:19.064985    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.065841    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.067396    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.067963    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.069601    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:19.073537  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:19.073559  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:19.102691  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:19.102717  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:19.156409  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:19.156445  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:21.705847  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:21.716387  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:21.716462  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:21.750665  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:21.750735  319301 cri.go:96] found id: ""
	I1227 20:13:21.750770  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:21.750862  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:21.754653  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:21.754723  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:21.779914  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:21.779938  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:21.779944  319301 cri.go:96] found id: ""
	I1227 20:13:21.779952  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:21.780015  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:21.783993  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:21.787625  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:21.787696  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:21.813514  319301 cri.go:96] found id: ""
	I1227 20:13:21.813543  319301 logs.go:282] 0 containers: []
	W1227 20:13:21.813552  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:21.813559  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:21.813629  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:21.844946  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:21.844968  319301 cri.go:96] found id: ""
	I1227 20:13:21.844976  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:21.845035  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:21.848813  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:21.848884  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:21.874101  319301 cri.go:96] found id: ""
	I1227 20:13:21.874174  319301 logs.go:282] 0 containers: []
	W1227 20:13:21.874190  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:21.874197  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:21.874255  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:21.900432  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:21.900455  319301 cri.go:96] found id: ""
	I1227 20:13:21.900463  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:21.900518  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:21.904020  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:21.904092  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:21.931082  319301 cri.go:96] found id: ""
	I1227 20:13:21.931107  319301 logs.go:282] 0 containers: []
	W1227 20:13:21.931116  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:21.931130  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:21.931173  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:21.977536  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:21.977621  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:22.057131  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:22.057167  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:22.162849  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:22.162890  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:22.181044  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:22.181074  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:22.251501  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:22.243628    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.244178    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.245787    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.246465    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.248081    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:22.243628    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.244178    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.245787    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.246465    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.248081    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:22.251520  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:22.251532  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:22.322039  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:22.322076  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:22.348945  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:22.348981  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:22.376440  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:22.376468  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:22.411192  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:22.411219  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:24.942580  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:24.952758  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:24.952881  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:24.984548  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:24.984572  319301 cri.go:96] found id: ""
	I1227 20:13:24.984580  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:24.984656  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:24.988133  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:24.988203  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:25.026479  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:25.026581  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:25.026603  319301 cri.go:96] found id: ""
	I1227 20:13:25.026645  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:25.026785  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:25.030841  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:25.034716  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:25.034800  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:25.061711  319301 cri.go:96] found id: ""
	I1227 20:13:25.061738  319301 logs.go:282] 0 containers: []
	W1227 20:13:25.061747  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:25.061753  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:25.061810  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:25.089318  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:25.089386  319301 cri.go:96] found id: ""
	I1227 20:13:25.089409  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:25.089517  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:25.093670  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:25.093795  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:25.121407  319301 cri.go:96] found id: ""
	I1227 20:13:25.121525  319301 logs.go:282] 0 containers: []
	W1227 20:13:25.121549  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:25.121569  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:25.121669  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:25.149007  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:25.149080  319301 cri.go:96] found id: ""
	I1227 20:13:25.149103  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:25.149187  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:25.153407  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:25.153596  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:25.179032  319301 cri.go:96] found id: ""
	I1227 20:13:25.179057  319301 logs.go:282] 0 containers: []
	W1227 20:13:25.179066  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:25.179079  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:25.179090  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:25.276200  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:25.276277  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:25.348617  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:25.340243    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.340862    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.343120    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.343588    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.345111    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:25.340243    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.340862    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.343120    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.343588    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.345111    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:25.348638  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:25.348655  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:25.406272  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:25.406306  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:25.452731  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:25.452768  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:25.480251  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:25.480280  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:25.557948  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:25.557985  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:25.593809  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:25.593838  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:25.615397  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:25.615429  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:25.646218  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:25.646248  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:28.174341  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:28.185173  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:28.185244  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:28.211104  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:28.211127  319301 cri.go:96] found id: ""
	I1227 20:13:28.211136  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:28.211191  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:28.214901  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:28.215009  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:28.246215  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:28.246280  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:28.246301  319301 cri.go:96] found id: ""
	I1227 20:13:28.246324  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:28.246405  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:28.250387  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:28.253817  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:28.253888  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:28.287626  319301 cri.go:96] found id: ""
	I1227 20:13:28.287651  319301 logs.go:282] 0 containers: []
	W1227 20:13:28.287659  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:28.287665  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:28.287725  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:28.316933  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:28.316954  319301 cri.go:96] found id: ""
	I1227 20:13:28.316962  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:28.317018  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:28.320933  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:28.321004  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:28.347084  319301 cri.go:96] found id: ""
	I1227 20:13:28.347112  319301 logs.go:282] 0 containers: []
	W1227 20:13:28.347122  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:28.347128  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:28.347185  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:28.378083  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:28.378106  319301 cri.go:96] found id: ""
	I1227 20:13:28.378115  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:28.378169  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:28.382099  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:28.382172  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:28.409209  319301 cri.go:96] found id: ""
	I1227 20:13:28.409235  319301 logs.go:282] 0 containers: []
	W1227 20:13:28.409244  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:28.409257  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:28.409270  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:28.427091  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:28.427120  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:28.490226  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:28.482506    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.483031    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.484594    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.484922    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.486441    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:28.482506    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.483031    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.484594    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.484922    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.486441    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:28.490251  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:28.490265  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:28.531892  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:28.531924  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:28.557604  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:28.557631  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:28.652391  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:28.652428  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:28.680025  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:28.680051  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:28.737147  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:28.737182  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:28.765648  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:28.765682  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:28.843337  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:28.843374  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:31.382818  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:31.393355  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:31.393426  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:31.420305  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:31.420328  319301 cri.go:96] found id: ""
	I1227 20:13:31.420336  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:31.420391  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:31.424001  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:31.424074  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:31.460581  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:31.460615  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:31.460621  319301 cri.go:96] found id: ""
	I1227 20:13:31.460635  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:31.460702  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:31.464544  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:31.468299  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:31.468414  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:31.500491  319301 cri.go:96] found id: ""
	I1227 20:13:31.500517  319301 logs.go:282] 0 containers: []
	W1227 20:13:31.500526  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:31.500533  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:31.500590  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:31.527178  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:31.527203  319301 cri.go:96] found id: ""
	I1227 20:13:31.527211  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:31.527273  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:31.530886  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:31.530980  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:31.558444  319301 cri.go:96] found id: ""
	I1227 20:13:31.558466  319301 logs.go:282] 0 containers: []
	W1227 20:13:31.558475  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:31.558482  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:31.558583  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:31.583987  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:31.584010  319301 cri.go:96] found id: ""
	I1227 20:13:31.584019  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:31.584072  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:31.587656  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:31.587728  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:31.613640  319301 cri.go:96] found id: ""
	I1227 20:13:31.613662  319301 logs.go:282] 0 containers: []
	W1227 20:13:31.613671  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:31.613692  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:31.613708  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:31.642242  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:31.642274  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:31.724401  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:31.724439  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:31.793926  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:31.785945    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.786581    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.788181    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.788659    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.789864    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:31.785945    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.786581    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.788181    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.788659    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.789864    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:31.793989  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:31.794011  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:31.825164  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:31.825193  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:31.877179  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:31.877211  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:31.912284  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:31.912319  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:32.015514  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:32.015558  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:32.034674  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:32.034705  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:32.099008  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:32.099062  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:34.634778  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:34.656177  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:34.656243  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:34.684782  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:34.684801  319301 cri.go:96] found id: ""
	I1227 20:13:34.684810  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:34.684865  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:34.688514  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:34.688585  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:34.712895  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:34.712915  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:34.712921  319301 cri.go:96] found id: ""
	I1227 20:13:34.712928  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:34.712995  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:34.716706  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:34.720270  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:34.720346  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:34.746430  319301 cri.go:96] found id: ""
	I1227 20:13:34.746456  319301 logs.go:282] 0 containers: []
	W1227 20:13:34.746465  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:34.746472  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:34.746530  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:34.773423  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:34.773481  319301 cri.go:96] found id: ""
	I1227 20:13:34.773490  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:34.773560  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:34.777238  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:34.777325  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:34.804429  319301 cri.go:96] found id: ""
	I1227 20:13:34.804455  319301 logs.go:282] 0 containers: []
	W1227 20:13:34.804464  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:34.804471  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:34.804528  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:34.837390  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:34.837412  319301 cri.go:96] found id: ""
	I1227 20:13:34.837421  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:34.837518  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:34.841292  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:34.841362  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:34.882512  319301 cri.go:96] found id: ""
	I1227 20:13:34.882537  319301 logs.go:282] 0 containers: []
	W1227 20:13:34.882547  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:34.882561  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:34.882593  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:34.935722  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:34.935778  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:34.963786  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:34.963815  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:35.068786  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:35.068824  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:35.118359  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:35.118402  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:35.146117  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:35.146144  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:35.223101  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:35.223145  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:35.255059  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:35.255089  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:35.276475  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:35.276510  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:35.351174  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:35.342460    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.343305    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.344856    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.345617    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.347573    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:35.342460    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.343305    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.344856    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.345617    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.347573    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:35.351239  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:35.351268  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:37.881796  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:37.894482  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:37.894556  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:37.924732  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:37.924756  319301 cri.go:96] found id: ""
	I1227 20:13:37.924765  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:37.924821  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:37.928636  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:37.928711  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:37.956752  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:37.956775  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:37.956781  319301 cri.go:96] found id: ""
	I1227 20:13:37.956801  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:37.956860  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:37.960536  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:37.964778  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:37.964879  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:37.998167  319301 cri.go:96] found id: ""
	I1227 20:13:37.998192  319301 logs.go:282] 0 containers: []
	W1227 20:13:37.998202  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:37.998208  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:37.998268  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:38.027828  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:38.027903  319301 cri.go:96] found id: ""
	I1227 20:13:38.027928  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:38.028019  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:38.032285  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:38.032374  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:38.063193  319301 cri.go:96] found id: ""
	I1227 20:13:38.063219  319301 logs.go:282] 0 containers: []
	W1227 20:13:38.063238  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:38.063277  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:38.063338  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:38.100160  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:38.100184  319301 cri.go:96] found id: ""
	I1227 20:13:38.100192  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:38.100248  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:38.104272  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:38.104360  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:38.132286  319301 cri.go:96] found id: ""
	I1227 20:13:38.132319  319301 logs.go:282] 0 containers: []
	W1227 20:13:38.132329  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:38.132343  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:38.132355  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:38.163697  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:38.163723  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:38.181632  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:38.181662  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:38.210225  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:38.210258  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:38.255805  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:38.255842  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:38.358465  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:38.358500  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:38.425713  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:38.417673    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.418194    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.420263    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.420756    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.422182    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:38.417673    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.418194    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.420263    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.420756    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.422182    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:38.425743  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:38.425766  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:38.481423  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:38.481466  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:38.506752  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:38.506783  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:38.536076  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:38.536104  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:41.112032  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:41.122203  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:41.122272  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:41.147769  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:41.147833  319301 cri.go:96] found id: ""
	I1227 20:13:41.147858  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:41.147945  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:41.151581  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:41.151651  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:41.176060  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:41.176078  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:41.176082  319301 cri.go:96] found id: ""
	I1227 20:13:41.176090  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:41.176144  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:41.179877  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:41.183247  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:41.183311  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:41.212692  319301 cri.go:96] found id: ""
	I1227 20:13:41.212717  319301 logs.go:282] 0 containers: []
	W1227 20:13:41.212727  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:41.212733  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:41.212814  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:41.237313  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:41.237335  319301 cri.go:96] found id: ""
	I1227 20:13:41.237343  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:41.237429  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:41.241432  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:41.241552  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:41.274168  319301 cri.go:96] found id: ""
	I1227 20:13:41.274196  319301 logs.go:282] 0 containers: []
	W1227 20:13:41.274206  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:41.274212  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:41.274295  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:41.300597  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:41.300620  319301 cri.go:96] found id: ""
	I1227 20:13:41.300628  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:41.300702  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:41.304360  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:41.304466  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:41.330795  319301 cri.go:96] found id: ""
	I1227 20:13:41.330819  319301 logs.go:282] 0 containers: []
	W1227 20:13:41.330828  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:41.330860  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:41.330885  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:41.358931  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:41.358960  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:41.383514  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:41.383539  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:41.469734  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:41.469771  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:41.573372  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:41.573411  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:41.591886  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:41.591916  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:41.674483  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:41.665884    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.666635    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.667427    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.669130    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.669864    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:41.665884    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.666635    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.667427    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.669130    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.669864    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:41.674507  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:41.674521  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:41.756704  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:41.756741  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:41.803676  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:41.803709  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:41.838752  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:41.838785  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:44.371993  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:44.382732  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:44.382811  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:44.408302  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:44.408324  319301 cri.go:96] found id: ""
	I1227 20:13:44.408332  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:44.408387  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:44.411908  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:44.411977  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:44.438505  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:44.438537  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:44.438543  319301 cri.go:96] found id: ""
	I1227 20:13:44.438551  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:44.438612  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:44.443020  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:44.446843  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:44.446907  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:44.473249  319301 cri.go:96] found id: ""
	I1227 20:13:44.473273  319301 logs.go:282] 0 containers: []
	W1227 20:13:44.473282  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:44.473288  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:44.473344  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:44.506635  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:44.506657  319301 cri.go:96] found id: ""
	I1227 20:13:44.506665  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:44.506719  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:44.510255  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:44.510327  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:44.535681  319301 cri.go:96] found id: ""
	I1227 20:13:44.535706  319301 logs.go:282] 0 containers: []
	W1227 20:13:44.535715  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:44.535722  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:44.535779  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:44.566431  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:44.566454  319301 cri.go:96] found id: ""
	I1227 20:13:44.566463  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:44.566544  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:44.570308  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:44.570429  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:44.596900  319301 cri.go:96] found id: ""
	I1227 20:13:44.596925  319301 logs.go:282] 0 containers: []
	W1227 20:13:44.596935  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:44.596969  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:44.596988  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:44.641306  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:44.641338  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:44.670860  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:44.670887  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:44.698228  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:44.698303  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:44.781609  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:44.781645  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:44.832828  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:44.832857  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:44.851403  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:44.851434  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:44.883766  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:44.883796  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:44.982715  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:44.982754  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:45.102278  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:45.090748   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.091715   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.092803   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.093981   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.094942   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:45.090748   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.091715   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.092803   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.093981   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.094942   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:45.102308  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:45.102333  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:47.711741  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:47.722289  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:47.722355  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:47.752456  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:47.752475  319301 cri.go:96] found id: ""
	I1227 20:13:47.752483  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:47.752545  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:47.756223  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:47.756290  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:47.781994  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:47.782016  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:47.782021  319301 cri.go:96] found id: ""
	I1227 20:13:47.782029  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:47.782082  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:47.785803  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:47.789134  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:47.789202  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:47.819133  319301 cri.go:96] found id: ""
	I1227 20:13:47.819166  319301 logs.go:282] 0 containers: []
	W1227 20:13:47.819176  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:47.819188  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:47.819261  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:47.848513  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:47.848534  319301 cri.go:96] found id: ""
	I1227 20:13:47.848542  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:47.848602  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:47.852477  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:47.852545  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:47.879163  319301 cri.go:96] found id: ""
	I1227 20:13:47.879188  319301 logs.go:282] 0 containers: []
	W1227 20:13:47.879198  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:47.879204  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:47.879288  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:47.906400  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:47.906422  319301 cri.go:96] found id: ""
	I1227 20:13:47.906430  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:47.906487  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:47.910061  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:47.910142  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:47.936751  319301 cri.go:96] found id: ""
	I1227 20:13:47.936822  319301 logs.go:282] 0 containers: []
	W1227 20:13:47.936855  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:47.936885  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:47.936928  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:48.041904  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:48.041941  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:48.059753  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:48.059783  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:48.091794  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:48.091825  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:48.119314  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:48.119341  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:48.167631  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:48.167656  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:48.236954  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:48.226933   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.228070   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.229057   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.230849   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.231433   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:48.226933   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.228070   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.229057   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.230849   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.231433   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:48.236978  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:48.236992  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:48.266604  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:48.266634  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:48.326691  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:48.326727  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:48.370030  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:48.370062  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:50.950604  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:50.960973  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:50.961044  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:50.989711  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:50.989734  319301 cri.go:96] found id: ""
	I1227 20:13:50.989743  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:50.989813  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:50.993765  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:50.993882  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:51.024930  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:51.024955  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:51.024976  319301 cri.go:96] found id: ""
	I1227 20:13:51.025000  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:51.025060  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:51.029133  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:51.034041  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:51.034136  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:51.061567  319301 cri.go:96] found id: ""
	I1227 20:13:51.061590  319301 logs.go:282] 0 containers: []
	W1227 20:13:51.061599  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:51.061608  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:51.061673  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:51.090737  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:51.090764  319301 cri.go:96] found id: ""
	I1227 20:13:51.090773  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:51.090847  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:51.095345  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:51.095432  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:51.123208  319301 cri.go:96] found id: ""
	I1227 20:13:51.123244  319301 logs.go:282] 0 containers: []
	W1227 20:13:51.123254  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:51.123260  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:51.123334  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:51.154295  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:51.154317  319301 cri.go:96] found id: ""
	I1227 20:13:51.154325  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:51.154407  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:51.158410  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:51.158485  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:51.189846  319301 cri.go:96] found id: ""
	I1227 20:13:51.189882  319301 logs.go:282] 0 containers: []
	W1227 20:13:51.189896  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:51.189909  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:51.189921  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:51.286819  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:51.286858  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:51.305366  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:51.305393  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:51.380305  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:51.380343  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:51.441677  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:51.441710  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:51.481914  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:51.481949  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:51.547090  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:51.539048   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.539678   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.541335   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.541928   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.543466   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:51.539048   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.539678   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.541335   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.541928   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.543466   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:51.547154  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:51.547176  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:51.578696  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:51.578725  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:51.608004  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:51.608032  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:51.636360  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:51.636391  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:54.212415  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:54.222852  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:54.222923  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:54.251561  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:54.251580  319301 cri.go:96] found id: ""
	I1227 20:13:54.251587  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:54.251645  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:54.255279  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:54.255354  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:54.292682  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:54.292706  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:54.292711  319301 cri.go:96] found id: ""
	I1227 20:13:54.292719  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:54.292781  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:54.296595  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:54.300085  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:54.300159  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:54.326489  319301 cri.go:96] found id: ""
	I1227 20:13:54.326555  319301 logs.go:282] 0 containers: []
	W1227 20:13:54.326579  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:54.326605  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:54.326696  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:54.353313  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:54.353338  319301 cri.go:96] found id: ""
	I1227 20:13:54.353347  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:54.353403  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:54.356927  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:54.356999  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:54.381581  319301 cri.go:96] found id: ""
	I1227 20:13:54.381617  319301 logs.go:282] 0 containers: []
	W1227 20:13:54.381626  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:54.381633  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:54.381691  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:54.414363  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:54.414383  319301 cri.go:96] found id: ""
	I1227 20:13:54.414391  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:54.414446  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:54.418045  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:54.418114  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:54.449206  319301 cri.go:96] found id: ""
	I1227 20:13:54.449229  319301 logs.go:282] 0 containers: []
	W1227 20:13:54.449238  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:54.449252  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:54.449264  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:54.517227  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:54.508584   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.509203   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.510795   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.511388   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.512826   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:54.508584   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.509203   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.510795   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.511388   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.512826   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:54.517253  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:54.517266  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:54.544360  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:54.544391  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:54.599513  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:54.599547  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:54.644818  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:54.644847  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:54.688568  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:54.688609  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:54.713724  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:54.713751  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:54.741842  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:54.741868  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:54.820175  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:54.820209  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:54.925045  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:54.925099  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:57.443738  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:57.454148  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:57.454219  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:57.484004  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:57.484071  319301 cri.go:96] found id: ""
	I1227 20:13:57.484087  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:57.484154  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:57.487937  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:57.488009  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:57.513954  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:57.513978  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:57.513983  319301 cri.go:96] found id: ""
	I1227 20:13:57.513991  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:57.514048  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:57.517734  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:57.521248  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:57.521322  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:57.548709  319301 cri.go:96] found id: ""
	I1227 20:13:57.548734  319301 logs.go:282] 0 containers: []
	W1227 20:13:57.548743  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:57.548749  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:57.548807  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:57.574830  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:57.574853  319301 cri.go:96] found id: ""
	I1227 20:13:57.574862  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:57.574919  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:57.578643  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:57.578770  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:57.604928  319301 cri.go:96] found id: ""
	I1227 20:13:57.604952  319301 logs.go:282] 0 containers: []
	W1227 20:13:57.604961  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:57.604967  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:57.605037  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:57.636096  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:57.636118  319301 cri.go:96] found id: ""
	I1227 20:13:57.636126  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:57.636181  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:57.640206  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:57.640289  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:57.664867  319301 cri.go:96] found id: ""
	I1227 20:13:57.664893  319301 logs.go:282] 0 containers: []
	W1227 20:13:57.664903  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:57.664918  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:57.664930  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:57.760571  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:57.760614  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:57.779034  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:57.779063  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:57.860979  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:57.853801   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.854291   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.855825   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.856219   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.857717   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:57.853801   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.854291   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.855825   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.856219   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.857717   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:57.861005  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:57.861030  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:57.891248  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:57.891279  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:57.951146  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:57.951184  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:57.983957  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:57.983983  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:58.027711  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:58.027751  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:58.057942  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:58.057967  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:58.134700  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:58.134737  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:00.665876  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:00.676353  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:00.676426  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:00.704251  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:00.704274  319301 cri.go:96] found id: ""
	I1227 20:14:00.704284  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:00.704369  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:00.708101  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:00.708172  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:00.744575  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:00.744598  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:00.744602  319301 cri.go:96] found id: ""
	I1227 20:14:00.744610  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:00.744681  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:00.748672  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:00.752393  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:00.752495  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:00.778438  319301 cri.go:96] found id: ""
	I1227 20:14:00.778463  319301 logs.go:282] 0 containers: []
	W1227 20:14:00.778472  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:00.778478  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:00.778568  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:00.804119  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:00.804143  319301 cri.go:96] found id: ""
	I1227 20:14:00.804152  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:00.804243  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:00.807914  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:00.808018  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:00.837548  319301 cri.go:96] found id: ""
	I1227 20:14:00.837626  319301 logs.go:282] 0 containers: []
	W1227 20:14:00.837640  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:00.837648  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:00.837723  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:00.864504  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:00.864527  319301 cri.go:96] found id: ""
	I1227 20:14:00.864535  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:00.864590  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:00.868408  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:00.868482  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:00.897150  319301 cri.go:96] found id: ""
	I1227 20:14:00.897173  319301 logs.go:282] 0 containers: []
	W1227 20:14:00.897182  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:00.897197  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:00.897210  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:00.998644  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:00.998688  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:01.021375  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:01.021415  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:01.054456  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:01.054487  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:01.115661  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:01.115700  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:01.161388  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:01.161423  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:01.192518  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:01.192549  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:01.275490  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:01.275523  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:01.341916  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:01.334014   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.334408   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.335994   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.336428   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.337960   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:01.334014   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.334408   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.335994   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.336428   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.337960   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:01.341937  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:01.341950  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:01.368174  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:01.368205  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:03.909559  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:03.920151  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:03.920223  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:03.950304  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:03.950321  319301 cri.go:96] found id: ""
	I1227 20:14:03.950329  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:03.950383  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:03.954284  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:03.954356  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:03.991836  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:03.991917  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:03.991937  319301 cri.go:96] found id: ""
	I1227 20:14:03.991960  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:03.992044  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:03.996532  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:04.000198  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:04.000315  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:04.031549  319301 cri.go:96] found id: ""
	I1227 20:14:04.031622  319301 logs.go:282] 0 containers: []
	W1227 20:14:04.031647  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:04.031671  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:04.031765  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:04.060260  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:04.060328  319301 cri.go:96] found id: ""
	I1227 20:14:04.060356  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:04.060444  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:04.064496  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:04.064588  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:04.102911  319301 cri.go:96] found id: ""
	I1227 20:14:04.103013  319301 logs.go:282] 0 containers: []
	W1227 20:14:04.103124  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:04.103169  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:04.103319  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:04.131147  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:04.131212  319301 cri.go:96] found id: ""
	I1227 20:14:04.131234  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:04.131327  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:04.135698  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:04.135819  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:04.164124  319301 cri.go:96] found id: ""
	I1227 20:14:04.164202  319301 logs.go:282] 0 containers: []
	W1227 20:14:04.164224  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:04.164266  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:04.164297  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:04.182491  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:04.182521  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:04.211036  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:04.211068  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:04.256784  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:04.256821  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:04.348299  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:04.348336  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:04.450573  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:04.450613  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:04.516283  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:04.506999   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.507835   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.510141   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.510856   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.512527   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:04.506999   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.507835   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.510141   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.510856   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.512527   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:04.516305  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:04.516319  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:04.576841  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:04.576872  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:04.614008  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:04.614035  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:04.641690  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:04.641719  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:07.176073  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:07.186712  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:07.186783  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:07.211686  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:07.211709  319301 cri.go:96] found id: ""
	I1227 20:14:07.211718  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:07.211775  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:07.215681  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:07.215756  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:07.240540  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:07.240563  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:07.240569  319301 cri.go:96] found id: ""
	I1227 20:14:07.240577  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:07.240630  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:07.245279  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:07.249179  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:07.249250  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:07.276774  319301 cri.go:96] found id: ""
	I1227 20:14:07.276800  319301 logs.go:282] 0 containers: []
	W1227 20:14:07.276810  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:07.276816  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:07.276873  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:07.304802  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:07.304821  319301 cri.go:96] found id: ""
	I1227 20:14:07.304829  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:07.304883  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:07.308534  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:07.308604  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:07.336318  319301 cri.go:96] found id: ""
	I1227 20:14:07.336344  319301 logs.go:282] 0 containers: []
	W1227 20:14:07.336354  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:07.336360  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:07.336423  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:07.362751  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:07.362771  319301 cri.go:96] found id: ""
	I1227 20:14:07.362780  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:07.362840  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:07.366846  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:07.366918  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:07.395130  319301 cri.go:96] found id: ""
	I1227 20:14:07.395152  319301 logs.go:282] 0 containers: []
	W1227 20:14:07.395161  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:07.395175  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:07.395187  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:07.491440  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:07.491518  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:07.527740  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:07.527770  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:07.558436  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:07.558464  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:07.588229  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:07.588259  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:07.607165  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:07.607197  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:07.677755  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:07.668928   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.669821   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.671526   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.672177   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.673864   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:07.668928   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.669821   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.671526   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.672177   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.673864   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:07.677777  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:07.677791  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:07.739114  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:07.739152  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:07.784369  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:07.784406  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:07.810544  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:07.810571  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:10.388063  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:10.398699  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:10.398769  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:10.429540  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:10.429607  319301 cri.go:96] found id: ""
	I1227 20:14:10.429631  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:10.429721  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:10.433534  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:10.433651  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:10.459275  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:10.459297  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:10.459303  319301 cri.go:96] found id: ""
	I1227 20:14:10.459310  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:10.459366  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:10.463124  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:10.466705  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:10.466798  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:10.492126  319301 cri.go:96] found id: ""
	I1227 20:14:10.492155  319301 logs.go:282] 0 containers: []
	W1227 20:14:10.492173  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:10.492184  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:10.492242  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:10.518226  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:10.518248  319301 cri.go:96] found id: ""
	I1227 20:14:10.518256  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:10.518364  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:10.522989  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:10.523096  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:10.549695  319301 cri.go:96] found id: ""
	I1227 20:14:10.549722  319301 logs.go:282] 0 containers: []
	W1227 20:14:10.549732  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:10.549738  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:10.549798  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:10.579366  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:10.579390  319301 cri.go:96] found id: ""
	I1227 20:14:10.579398  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:10.579455  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:10.583638  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:10.583714  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:10.615082  319301 cri.go:96] found id: ""
	I1227 20:14:10.615105  319301 logs.go:282] 0 containers: []
	W1227 20:14:10.615113  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:10.615130  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:10.615142  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:10.683394  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:10.674472   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.675801   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.676387   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.678136   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.678634   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:10.674472   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.675801   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.676387   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.678136   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.678634   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:10.683412  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:10.683425  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:10.727898  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:10.727931  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:10.753009  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:10.753042  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:10.782677  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:10.782703  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:10.866110  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:10.866147  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:10.959413  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:10.959452  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:10.977909  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:10.977941  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:11.005943  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:11.005969  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:11.074309  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:11.074346  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:13.614417  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:13.625578  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:13.625646  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:13.652507  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:13.652525  319301 cri.go:96] found id: ""
	I1227 20:14:13.652534  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:13.652588  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:13.656545  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:13.656609  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:13.683073  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:13.683097  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:13.683102  319301 cri.go:96] found id: ""
	I1227 20:14:13.683110  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:13.683166  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:13.686968  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:13.690405  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:13.690466  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:13.717840  319301 cri.go:96] found id: ""
	I1227 20:14:13.717864  319301 logs.go:282] 0 containers: []
	W1227 20:14:13.717873  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:13.717879  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:13.717938  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:13.746028  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:13.746049  319301 cri.go:96] found id: ""
	I1227 20:14:13.746058  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:13.746117  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:13.749660  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:13.749741  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:13.775234  319301 cri.go:96] found id: ""
	I1227 20:14:13.775301  319301 logs.go:282] 0 containers: []
	W1227 20:14:13.775322  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:13.775330  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:13.775388  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:13.800618  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:13.800642  319301 cri.go:96] found id: ""
	I1227 20:14:13.800650  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:13.800708  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:13.804545  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:13.804619  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:13.832761  319301 cri.go:96] found id: ""
	I1227 20:14:13.832786  319301 logs.go:282] 0 containers: []
	W1227 20:14:13.832795  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:13.832811  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:13.832824  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:13.851133  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:13.851163  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:13.926603  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:13.926681  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:13.961517  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:13.961544  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:14.069694  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:14.069739  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:14.151483  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:14.142577   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.143391   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.145037   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.145551   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.147508   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:14.142577   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.143391   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.145037   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.145551   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.147508   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:14.151505  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:14.151520  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:14.181727  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:14.181758  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:14.240301  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:14.240339  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:14.300709  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:14.300743  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:14.336466  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:14.336498  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:16.865634  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:16.876358  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:16.876432  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:16.904188  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:16.904253  319301 cri.go:96] found id: ""
	I1227 20:14:16.904276  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:16.904367  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:16.908220  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:16.908322  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:16.937896  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:16.937919  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:16.937924  319301 cri.go:96] found id: ""
	I1227 20:14:16.937932  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:16.937986  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:16.942670  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:16.946301  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:16.946387  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:16.985586  319301 cri.go:96] found id: ""
	I1227 20:14:16.985609  319301 logs.go:282] 0 containers: []
	W1227 20:14:16.985618  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:16.985624  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:16.985683  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:17.013996  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:17.014029  319301 cri.go:96] found id: ""
	I1227 20:14:17.014039  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:17.014137  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:17.018935  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:17.019008  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:17.052484  319301 cri.go:96] found id: ""
	I1227 20:14:17.052561  319301 logs.go:282] 0 containers: []
	W1227 20:14:17.052583  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:17.052604  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:17.052695  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:17.081622  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:17.081695  319301 cri.go:96] found id: ""
	I1227 20:14:17.081718  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:17.081788  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:17.085690  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:17.085794  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:17.112049  319301 cri.go:96] found id: ""
	I1227 20:14:17.112074  319301 logs.go:282] 0 containers: []
	W1227 20:14:17.112082  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:17.112098  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:17.112141  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:17.137714  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:17.137743  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:17.213490  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:17.213533  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:17.246326  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:17.246356  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:17.328320  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:17.320845   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.321569   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.322897   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.323352   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.324795   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:17.320845   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.321569   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.322897   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.323352   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.324795   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:17.328340  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:17.328353  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:17.385541  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:17.385578  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:17.427419  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:17.427449  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:17.452174  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:17.452206  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:17.546685  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:17.546724  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:17.565295  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:17.565332  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:20.098978  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:20.111051  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:20.111126  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:20.137851  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:20.137927  319301 cri.go:96] found id: ""
	I1227 20:14:20.137963  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:20.138055  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:20.142900  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:20.143001  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:20.170010  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:20.170087  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:20.170109  319301 cri.go:96] found id: ""
	I1227 20:14:20.170137  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:20.170221  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:20.175063  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:20.178747  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:20.178824  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:20.206381  319301 cri.go:96] found id: ""
	I1227 20:14:20.206409  319301 logs.go:282] 0 containers: []
	W1227 20:14:20.206418  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:20.206425  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:20.206485  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:20.233473  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:20.233499  319301 cri.go:96] found id: ""
	I1227 20:14:20.233508  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:20.233571  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:20.237997  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:20.238070  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:20.262995  319301 cri.go:96] found id: ""
	I1227 20:14:20.263067  319301 logs.go:282] 0 containers: []
	W1227 20:14:20.263092  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:20.263099  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:20.263170  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:20.288462  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:20.288537  319301 cri.go:96] found id: ""
	I1227 20:14:20.288566  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:20.288647  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:20.292436  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:20.292550  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:20.322573  319301 cri.go:96] found id: ""
	I1227 20:14:20.322596  319301 logs.go:282] 0 containers: []
	W1227 20:14:20.322605  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:20.322621  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:20.322633  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:20.432211  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:20.432245  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:20.496754  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:20.496791  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:20.540278  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:20.540351  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:20.567122  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:20.567152  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:20.648855  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:20.648895  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:20.667153  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:20.667185  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:20.736076  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:20.727815   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.728362   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.730119   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.730829   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.732497   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:20.727815   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.728362   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.730119   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.730829   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.732497   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:20.736098  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:20.736112  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:20.762277  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:20.762304  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:20.800871  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:20.800901  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:23.331772  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:23.342153  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:23.342227  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:23.367402  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:23.367424  319301 cri.go:96] found id: ""
	I1227 20:14:23.367433  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:23.367489  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:23.371067  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:23.371137  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:23.397005  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:23.397081  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:23.397101  319301 cri.go:96] found id: ""
	I1227 20:14:23.397127  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:23.397212  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:23.401002  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:23.404386  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:23.404490  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:23.430285  319301 cri.go:96] found id: ""
	I1227 20:14:23.430309  319301 logs.go:282] 0 containers: []
	W1227 20:14:23.430318  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:23.430326  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:23.430383  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:23.461494  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:23.461517  319301 cri.go:96] found id: ""
	I1227 20:14:23.461526  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:23.461578  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:23.465337  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:23.465409  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:23.496783  319301 cri.go:96] found id: ""
	I1227 20:14:23.496808  319301 logs.go:282] 0 containers: []
	W1227 20:14:23.496818  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:23.496824  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:23.496881  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:23.522580  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:23.522602  319301 cri.go:96] found id: ""
	I1227 20:14:23.522610  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:23.522665  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:23.526436  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:23.526519  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:23.557267  319301 cri.go:96] found id: ""
	I1227 20:14:23.557299  319301 logs.go:282] 0 containers: []
	W1227 20:14:23.557309  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:23.557325  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:23.557336  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:23.584981  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:23.585010  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:23.648213  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:23.648252  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:23.695771  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:23.695847  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:23.726135  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:23.726165  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:23.810400  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:23.810440  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:23.916410  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:23.916451  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:23.945753  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:23.945825  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:23.996874  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:23.996903  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:24.015806  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:24.015853  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:24.093634  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:24.083702   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.084655   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.086499   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.086863   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.088426   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:24.083702   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.084655   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.086499   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.086863   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.088426   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
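	(The cycle above repeats while the apiserver stays unreachable. As a minimal sketch, the same diagnostics can be gathered by hand with the commands that appear verbatim in this log, assuming a shell on the minikube node, e.g. via `minikube ssh`; the paths and --timeout/--tail values simply mirror the log and may differ on other setups:)
	
	  # is an apiserver container present at all?
	  sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	  # kubelet and CRI-O service logs
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	  # kernel warnings and errors
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  # fails here with "connection refused" on localhost:8443, as shown above
	  sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	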
	I1227 20:14:26.595192  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:26.607312  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:26.607388  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:26.644526  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:26.644546  319301 cri.go:96] found id: ""
	I1227 20:14:26.644554  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:26.644613  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:26.648515  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:26.648588  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:26.674360  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:26.674383  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:26.674387  319301 cri.go:96] found id: ""
	I1227 20:14:26.674395  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:26.674451  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:26.678114  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:26.681548  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:26.681619  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:26.707823  319301 cri.go:96] found id: ""
	I1227 20:14:26.707847  319301 logs.go:282] 0 containers: []
	W1227 20:14:26.707856  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:26.707863  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:26.707918  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:26.736808  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:26.736830  319301 cri.go:96] found id: ""
	I1227 20:14:26.736839  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:26.736910  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:26.740449  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:26.740516  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:26.767979  319301 cri.go:96] found id: ""
	I1227 20:14:26.768005  319301 logs.go:282] 0 containers: []
	W1227 20:14:26.768014  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:26.768020  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:26.768093  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:26.794399  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:26.794419  319301 cri.go:96] found id: ""
	I1227 20:14:26.794428  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:26.794482  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:26.798158  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:26.798242  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:26.822859  319301 cri.go:96] found id: ""
	I1227 20:14:26.822883  319301 logs.go:282] 0 containers: []
	W1227 20:14:26.822893  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:26.822924  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:26.822946  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:26.868214  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:26.868238  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:26.932994  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:26.933029  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:26.977303  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:26.977340  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:27.068000  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:27.068040  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:27.171536  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:27.171574  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:27.190535  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:27.190562  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:27.216736  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:27.216762  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:27.243411  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:27.243439  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:27.295099  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:27.295126  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:27.357878  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:27.350559   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.350955   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.352482   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.352824   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.354320   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:27.350559   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.350955   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.352482   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.352824   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.354320   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:29.858681  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:29.868776  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:29.868844  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:29.896575  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:29.896597  319301 cri.go:96] found id: ""
	I1227 20:14:29.896605  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:29.896686  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:29.900141  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:29.900230  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:29.933885  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:29.933909  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:29.933915  319301 cri.go:96] found id: ""
	I1227 20:14:29.933922  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:29.933995  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:29.937419  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:29.940597  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:29.940661  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:29.985795  319301 cri.go:96] found id: ""
	I1227 20:14:29.985826  319301 logs.go:282] 0 containers: []
	W1227 20:14:29.985836  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:29.985843  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:29.985919  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:30.025679  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:30.025700  319301 cri.go:96] found id: ""
	I1227 20:14:30.025709  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:30.025777  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:30.049697  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:30.049787  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:30.082890  319301 cri.go:96] found id: ""
	I1227 20:14:30.082916  319301 logs.go:282] 0 containers: []
	W1227 20:14:30.082926  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:30.082934  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:30.083006  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:30.119124  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:30.119148  319301 cri.go:96] found id: ""
	I1227 20:14:30.119156  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:30.119217  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:30.123169  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:30.123244  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:30.151766  319301 cri.go:96] found id: ""
	I1227 20:14:30.151790  319301 logs.go:282] 0 containers: []
	W1227 20:14:30.151799  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:30.151816  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:30.151828  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:30.169326  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:30.169357  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:30.199380  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:30.199412  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:30.265121  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:30.265163  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:30.356459  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:30.356498  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:30.392984  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:30.393013  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:30.499474  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:30.499511  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:30.571342  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:30.561186   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.563435   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.564195   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.566014   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.566655   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:30.561186   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.563435   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.564195   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.566014   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.566655   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:30.571365  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:30.571378  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:30.615172  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:30.615207  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:30.644774  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:30.644803  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:33.172504  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:33.183855  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:33.183927  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:33.214210  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:33.214232  319301 cri.go:96] found id: ""
	I1227 20:14:33.214241  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:33.214307  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:33.218161  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:33.218245  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:33.244477  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:33.244501  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:33.244506  319301 cri.go:96] found id: ""
	I1227 20:14:33.244513  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:33.244574  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:33.248725  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:33.252096  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:33.252166  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:33.284273  319301 cri.go:96] found id: ""
	I1227 20:14:33.284304  319301 logs.go:282] 0 containers: []
	W1227 20:14:33.284317  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:33.284327  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:33.284406  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:33.311094  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:33.311117  319301 cri.go:96] found id: ""
	I1227 20:14:33.311125  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:33.311184  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:33.315375  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:33.315450  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:33.344846  319301 cri.go:96] found id: ""
	I1227 20:14:33.344870  319301 logs.go:282] 0 containers: []
	W1227 20:14:33.344879  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:33.344886  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:33.344945  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:33.370949  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:33.371011  319301 cri.go:96] found id: ""
	I1227 20:14:33.371033  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:33.371093  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:33.375136  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:33.375211  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:33.403339  319301 cri.go:96] found id: ""
	I1227 20:14:33.403361  319301 logs.go:282] 0 containers: []
	W1227 20:14:33.403370  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:33.403385  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:33.403396  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:33.484170  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:33.484207  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:33.516735  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:33.516766  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:33.534421  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:33.534452  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:33.613759  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:33.613800  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:33.651422  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:33.651450  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:33.759905  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:33.759949  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:33.827184  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:33.819142   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.819867   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.821423   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.822059   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.823552   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:33.819142   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.819867   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.821423   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.822059   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.823552   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:33.827217  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:33.827232  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:33.858891  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:33.858926  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:33.904092  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:33.904128  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:36.431294  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:36.449106  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:36.449178  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:36.480392  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:36.480416  319301 cri.go:96] found id: ""
	I1227 20:14:36.480425  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:36.480481  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:36.485341  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:36.485424  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:36.515111  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:36.515185  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:36.515199  319301 cri.go:96] found id: ""
	I1227 20:14:36.515225  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:36.515283  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:36.519737  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:36.523801  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:36.523877  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:36.550603  319301 cri.go:96] found id: ""
	I1227 20:14:36.550628  319301 logs.go:282] 0 containers: []
	W1227 20:14:36.550637  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:36.550644  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:36.550699  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:36.586466  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:36.586492  319301 cri.go:96] found id: ""
	I1227 20:14:36.586500  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:36.586577  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:36.590067  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:36.590139  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:36.621202  319301 cri.go:96] found id: ""
	I1227 20:14:36.621235  319301 logs.go:282] 0 containers: []
	W1227 20:14:36.621244  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:36.621250  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:36.621308  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:36.647269  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:36.647292  319301 cri.go:96] found id: ""
	I1227 20:14:36.647301  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:36.647379  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:36.651085  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:36.651160  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:36.677749  319301 cri.go:96] found id: ""
	I1227 20:14:36.677778  319301 logs.go:282] 0 containers: []
	W1227 20:14:36.677788  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:36.677804  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:36.677817  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:36.725080  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:36.725110  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:36.755181  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:36.755211  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:36.784468  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:36.784496  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:36.816908  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:36.816940  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:36.834015  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:36.834047  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:36.900869  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:36.892648   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.893851   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.894994   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.895421   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.896907   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:36.892648   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.893851   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.894994   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.895421   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.896907   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:36.900892  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:36.900908  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:36.960391  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:36.960427  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:37.045275  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:37.045325  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:37.148150  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:37.148188  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:39.676095  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:39.686901  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:39.686981  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:39.713632  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:39.713662  319301 cri.go:96] found id: ""
	I1227 20:14:39.713681  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:39.713758  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:39.717685  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:39.717762  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:39.744240  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:39.744264  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:39.744269  319301 cri.go:96] found id: ""
	I1227 20:14:39.744277  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:39.744330  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:39.748168  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:39.751671  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:39.751770  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:39.781268  319301 cri.go:96] found id: ""
	I1227 20:14:39.781293  319301 logs.go:282] 0 containers: []
	W1227 20:14:39.781302  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:39.781309  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:39.781401  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:39.810785  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:39.810807  319301 cri.go:96] found id: ""
	I1227 20:14:39.810815  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:39.810888  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:39.814715  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:39.814784  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:39.841437  319301 cri.go:96] found id: ""
	I1227 20:14:39.841493  319301 logs.go:282] 0 containers: []
	W1227 20:14:39.841503  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:39.841508  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:39.841573  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:39.868907  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:39.868925  319301 cri.go:96] found id: ""
	I1227 20:14:39.868933  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:39.868987  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:39.872674  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:39.872744  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:39.900867  319301 cri.go:96] found id: ""
	I1227 20:14:39.900943  319301 logs.go:282] 0 containers: []
	W1227 20:14:39.900966  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:39.901013  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:39.901043  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:39.918593  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:39.918625  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:39.949056  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:39.949087  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:39.981788  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:39.981818  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:40.105238  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:40.105377  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:40.191666  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:40.183905   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.184449   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.186006   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.186447   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.187950   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:40.183905   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.184449   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.186006   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.186447   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.187950   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:40.191684  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:40.191701  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:40.262140  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:40.262180  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:40.310808  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:40.310845  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:40.337783  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:40.337811  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:40.368704  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:40.368733  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:42.951291  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:42.961621  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:42.961714  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:42.996358  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:42.996382  319301 cri.go:96] found id: ""
	I1227 20:14:42.996391  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:42.996476  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:43.000167  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:43.000258  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:43.042517  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:43.042542  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:43.042547  319301 cri.go:96] found id: ""
	I1227 20:14:43.042555  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:43.042636  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:43.046498  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:43.049992  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:43.050069  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:43.076653  319301 cri.go:96] found id: ""
	I1227 20:14:43.076681  319301 logs.go:282] 0 containers: []
	W1227 20:14:43.076690  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:43.076697  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:43.076755  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:43.104355  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:43.104379  319301 cri.go:96] found id: ""
	I1227 20:14:43.104388  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:43.104444  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:43.108064  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:43.108137  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:43.136746  319301 cri.go:96] found id: ""
	I1227 20:14:43.136771  319301 logs.go:282] 0 containers: []
	W1227 20:14:43.136780  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:43.136786  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:43.136856  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:43.167333  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:43.167354  319301 cri.go:96] found id: ""
	I1227 20:14:43.167362  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:43.167417  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:43.171054  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:43.171167  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:43.196510  319301 cri.go:96] found id: ""
	I1227 20:14:43.196539  319301 logs.go:282] 0 containers: []
	W1227 20:14:43.196548  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:43.196562  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:43.196573  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:43.246188  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:43.246222  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:43.280060  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:43.280088  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:43.364679  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:43.364718  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:43.383405  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:43.383434  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:43.412457  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:43.412484  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:43.441225  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:43.441251  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:43.483277  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:43.483305  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:43.587381  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:43.587418  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:43.657966  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:43.648616   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.649341   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.651243   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.652029   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.653574   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:43.648616   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.649341   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.651243   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.652029   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.653574   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:43.657996  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:43.658011  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:46.217780  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:46.229546  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:46.229622  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:46.255054  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:46.255074  319301 cri.go:96] found id: ""
	I1227 20:14:46.255082  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:46.255135  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:46.258848  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:46.258946  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:46.292684  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:46.292758  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:46.292778  319301 cri.go:96] found id: ""
	I1227 20:14:46.292803  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:46.292889  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:46.296621  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:46.300035  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:46.300104  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:46.325669  319301 cri.go:96] found id: ""
	I1227 20:14:46.325694  319301 logs.go:282] 0 containers: []
	W1227 20:14:46.325703  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:46.325709  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:46.325766  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:46.352094  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:46.352159  319301 cri.go:96] found id: ""
	I1227 20:14:46.352182  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:46.352268  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:46.355963  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:46.356077  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:46.381620  319301 cri.go:96] found id: ""
	I1227 20:14:46.381646  319301 logs.go:282] 0 containers: []
	W1227 20:14:46.381656  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:46.381662  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:46.381738  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:46.410104  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:46.410127  319301 cri.go:96] found id: ""
	I1227 20:14:46.410135  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:46.410191  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:46.413648  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:46.413715  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:46.440709  319301 cri.go:96] found id: ""
	I1227 20:14:46.440734  319301 logs.go:282] 0 containers: []
	W1227 20:14:46.440745  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:46.440759  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:46.440781  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:46.469916  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:46.469945  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:46.571819  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:46.571854  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:46.590503  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:46.590531  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:46.624094  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:46.624120  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:46.655415  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:46.655444  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:46.727967  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:46.719794   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.720498   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.722193   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.722714   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.724244   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:46.719794   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.720498   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.722193   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.722714   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.724244   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:46.727989  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:46.728003  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:46.787862  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:46.787899  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:46.848761  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:46.848797  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:46.883658  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:46.883687  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:49.466063  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:49.476365  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:49.476460  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:49.502643  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:49.502665  319301 cri.go:96] found id: ""
	I1227 20:14:49.502673  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:49.502727  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:49.506369  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:49.506443  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:49.532399  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:49.532421  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:49.532427  319301 cri.go:96] found id: ""
	I1227 20:14:49.532435  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:49.532488  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:49.536133  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:49.539580  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:49.539645  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:49.566501  319301 cri.go:96] found id: ""
	I1227 20:14:49.566528  319301 logs.go:282] 0 containers: []
	W1227 20:14:49.566537  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:49.566544  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:49.566605  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:49.602221  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:49.602245  319301 cri.go:96] found id: ""
	I1227 20:14:49.602254  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:49.602316  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:49.606305  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:49.606375  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:49.632906  319301 cri.go:96] found id: ""
	I1227 20:14:49.632931  319301 logs.go:282] 0 containers: []
	W1227 20:14:49.632941  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:49.632946  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:49.633012  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:49.660593  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:49.660616  319301 cri.go:96] found id: ""
	I1227 20:14:49.660625  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:49.660683  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:49.664343  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:49.664414  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:49.691030  319301 cri.go:96] found id: ""
	I1227 20:14:49.691093  319301 logs.go:282] 0 containers: []
	W1227 20:14:49.691110  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:49.691125  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:49.691137  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:49.786516  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:49.786552  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:49.837581  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:49.837615  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:49.923089  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:49.923126  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:49.964776  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:49.964806  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:49.984138  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:49.984166  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:50.053988  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:50.045799   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.046531   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.048064   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.048564   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.050125   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:50.045799   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.046531   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.048064   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.048564   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.050125   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
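Every "failed describe nodes" entry in this run fails the same way: the bundled kubectl cannot reach the apiserver on localhost:8443, so all it returns is the connection-refused stderr shown above. A quick check from inside the node would be to rerun the exact command the log records, or to probe the port directly (only the kubectl invocation appears in the log; the curl probe is an illustrative assumption):

    # The exact command minikube runs, taken verbatim from the log above:
    sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

    # Illustrative only: see whether anything answers on the apiserver port at all.
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable on :8443"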
	I1227 20:14:50.054052  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:50.054072  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:50.080753  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:50.080847  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:50.160335  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:50.160373  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:50.189801  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:50.189831  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:52.722382  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:52.732860  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:52.732954  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:52.759105  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:52.759129  319301 cri.go:96] found id: ""
	I1227 20:14:52.759140  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:52.759192  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:52.763086  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:52.763152  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:52.789342  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:52.789365  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:52.789370  319301 cri.go:96] found id: ""
	I1227 20:14:52.789378  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:52.789441  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:52.793045  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:52.796599  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:52.796677  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:52.821951  319301 cri.go:96] found id: ""
	I1227 20:14:52.821975  319301 logs.go:282] 0 containers: []
	W1227 20:14:52.821984  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:52.821990  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:52.822048  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:52.848207  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:52.848227  319301 cri.go:96] found id: ""
	I1227 20:14:52.848235  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:52.848290  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:52.852016  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:52.852114  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:52.878718  319301 cri.go:96] found id: ""
	I1227 20:14:52.878752  319301 logs.go:282] 0 containers: []
	W1227 20:14:52.878761  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:52.878768  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:52.878826  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:52.905928  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:52.906001  319301 cri.go:96] found id: ""
	I1227 20:14:52.906023  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:52.906113  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:52.910178  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:52.910250  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:52.937172  319301 cri.go:96] found id: ""
	I1227 20:14:52.937209  319301 logs.go:282] 0 containers: []
	W1227 20:14:52.937218  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:52.937231  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:52.937249  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:52.966131  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:52.966162  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:53.003464  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:53.003490  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:53.021719  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:53.021777  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:53.091033  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:53.081906   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.083382   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.084066   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.085728   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.086021   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:53.081906   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.083382   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.084066   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.085728   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.086021   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:53.091054  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:53.091067  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:53.153878  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:53.153918  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:53.184615  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:53.184643  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:53.268968  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:53.269005  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:53.374253  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:53.374287  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:53.403008  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:53.403044  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
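For each container ID it finds, minikube tails the last 400 lines through the crictl binary on the node, as in the "Gathering logs for etcd" lines just above. Doing the same thing by hand for one of the IDs from this log would look like the sketch below (the particular etcd ID is just an example drawn from the output above):

    # Tail the newest etcd container's log, same flags as the gathering lines above.
    sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd

    # Or tail every etcd container crictl knows about:
    for id in $(sudo crictl --timeout=10s ps -a --quiet --name=etcd); do
      sudo /usr/local/bin/crictl logs --tail 400 "$id"
    done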
	I1227 20:14:55.952353  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:55.962631  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:55.962719  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:55.995078  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:55.995100  319301 cri.go:96] found id: ""
	I1227 20:14:55.995108  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:55.995174  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:55.999787  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:55.999857  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:56.034785  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:56.034809  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:56.034814  319301 cri.go:96] found id: ""
	I1227 20:14:56.034821  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:56.034886  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:56.039026  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:56.043109  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:56.043239  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:56.076322  319301 cri.go:96] found id: ""
	I1227 20:14:56.076349  319301 logs.go:282] 0 containers: []
	W1227 20:14:56.076358  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:56.076365  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:56.076450  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:56.105910  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:56.105937  319301 cri.go:96] found id: ""
	I1227 20:14:56.105945  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:56.106024  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:56.109833  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:56.109951  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:56.136658  319301 cri.go:96] found id: ""
	I1227 20:14:56.136681  319301 logs.go:282] 0 containers: []
	W1227 20:14:56.136690  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:56.136696  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:56.136751  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:56.162379  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:56.162402  319301 cri.go:96] found id: ""
	I1227 20:14:56.162409  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:56.162464  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:56.165959  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:56.166030  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:56.193023  319301 cri.go:96] found id: ""
	I1227 20:14:56.193057  319301 logs.go:282] 0 containers: []
	W1227 20:14:56.193066  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:56.193097  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:56.193131  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:56.219549  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:56.219577  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:56.255190  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:56.255218  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:56.326655  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:56.326690  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:56.369967  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:56.370002  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:56.449778  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:56.449815  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:56.481804  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:56.481833  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:56.580473  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:56.580507  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:56.597748  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:56.597781  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:56.675164  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:56.667282   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.668004   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.669569   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.670031   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.671487   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:56.667282   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.668004   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.669569   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.670031   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.671487   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:56.675187  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:56.675210  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:59.204907  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:59.215384  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:59.215464  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:59.241010  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:59.241041  319301 cri.go:96] found id: ""
	I1227 20:14:59.241056  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:59.241157  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:59.245340  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:59.245433  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:59.282857  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:59.282880  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:59.282886  319301 cri.go:96] found id: ""
	I1227 20:14:59.282893  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:59.282945  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:59.286535  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:59.289810  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:59.289879  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:59.317473  319301 cri.go:96] found id: ""
	I1227 20:14:59.317509  319301 logs.go:282] 0 containers: []
	W1227 20:14:59.317517  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:59.317524  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:59.317593  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:59.350932  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:59.350952  319301 cri.go:96] found id: ""
	I1227 20:14:59.350961  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:59.351015  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:59.354698  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:59.354768  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:59.381626  319301 cri.go:96] found id: ""
	I1227 20:14:59.381660  319301 logs.go:282] 0 containers: []
	W1227 20:14:59.381669  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:59.381675  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:59.381730  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:59.408107  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:59.408130  319301 cri.go:96] found id: ""
	I1227 20:14:59.408140  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:59.408216  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:59.411771  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:59.411846  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:59.436633  319301 cri.go:96] found id: ""
	I1227 20:14:59.436660  319301 logs.go:282] 0 containers: []
	W1227 20:14:59.436669  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:59.436683  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:59.436695  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:59.532932  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:59.532968  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:59.601543  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:59.593318   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.594069   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.595883   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.596441   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.597498   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:59.593318   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.594069   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.595883   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.596441   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.597498   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:59.601573  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:59.601587  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:59.630627  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:59.630653  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:59.691462  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:59.691537  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:59.736271  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:59.736311  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:59.763317  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:59.763349  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:59.845478  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:59.845512  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:59.877233  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:59.877259  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:59.894077  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:59.894108  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:02.425928  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:15:02.437025  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:15:02.437097  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:15:02.462847  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:15:02.462876  319301 cri.go:96] found id: ""
	I1227 20:15:02.462885  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:15:02.462941  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:02.466840  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:15:02.466915  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:15:02.493867  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:15:02.493889  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:15:02.493895  319301 cri.go:96] found id: ""
	I1227 20:15:02.493903  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:15:02.493986  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:02.497849  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:02.501391  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:15:02.501500  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:15:02.531735  319301 cri.go:96] found id: ""
	I1227 20:15:02.531761  319301 logs.go:282] 0 containers: []
	W1227 20:15:02.531771  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:15:02.531779  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:15:02.531858  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:15:02.557699  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:15:02.557723  319301 cri.go:96] found id: ""
	I1227 20:15:02.557732  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:15:02.557792  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:02.561785  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:15:02.561860  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:15:02.588584  319301 cri.go:96] found id: ""
	I1227 20:15:02.588611  319301 logs.go:282] 0 containers: []
	W1227 20:15:02.588620  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:15:02.588665  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:15:02.588727  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:15:02.626246  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:02.626270  319301 cri.go:96] found id: ""
	I1227 20:15:02.626279  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:15:02.626332  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:02.630342  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:15:02.630416  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:15:02.658875  319301 cri.go:96] found id: ""
	I1227 20:15:02.658899  319301 logs.go:282] 0 containers: []
	W1227 20:15:02.658908  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:15:02.658940  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:15:02.658959  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:15:02.760567  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:15:02.760609  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:15:02.779705  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:15:02.779737  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:15:02.864780  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:15:02.844552   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.845307   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.847070   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.847814   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.850808   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:15:02.844552   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.845307   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.847070   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.847814   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.850808   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:15:02.864807  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:15:02.864822  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:15:02.930564  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:15:02.930600  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:15:02.956647  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:15:02.956674  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:02.988569  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:15:02.988644  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:15:03.080368  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:15:03.080404  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:15:03.109214  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:15:03.109254  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:15:03.154097  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:15:03.154130  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:15:05.702871  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:15:05.713737  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:15:05.713808  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:15:05.747061  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:15:05.747087  319301 cri.go:96] found id: ""
	I1227 20:15:05.747097  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:15:05.747152  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:05.751069  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:15:05.751142  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:15:05.778241  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:15:05.778264  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:15:05.778269  319301 cri.go:96] found id: ""
	I1227 20:15:05.778276  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:15:05.778330  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:05.781970  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:05.785615  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:15:05.785684  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:15:05.811372  319301 cri.go:96] found id: ""
	I1227 20:15:05.811405  319301 logs.go:282] 0 containers: []
	W1227 20:15:05.811419  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:15:05.811426  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:15:05.811487  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:15:05.837308  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:15:05.837331  319301 cri.go:96] found id: ""
	I1227 20:15:05.837339  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:15:05.837394  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:05.841435  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:15:05.841563  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:15:05.872145  319301 cri.go:96] found id: ""
	I1227 20:15:05.872175  319301 logs.go:282] 0 containers: []
	W1227 20:15:05.872184  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:15:05.872191  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:15:05.872248  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:15:05.905843  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:05.905863  319301 cri.go:96] found id: ""
	I1227 20:15:05.905872  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:15:05.905928  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:05.909362  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:15:05.909433  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:15:05.937743  319301 cri.go:96] found id: ""
	I1227 20:15:05.937768  319301 logs.go:282] 0 containers: []
	W1227 20:15:05.937776  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:15:05.937789  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:15:05.937805  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:15:05.956337  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:15:05.956373  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:15:06.027819  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:15:06.027857  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:15:06.055387  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:15:06.055417  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:15:06.087848  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:15:06.087876  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:15:06.191189  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:15:06.191225  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:15:06.260486  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:15:06.252420   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.253150   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.254651   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.255097   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.256545   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:15:06.252420   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.253150   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.254651   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.255097   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.256545   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:15:06.260512  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:15:06.260527  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:15:06.289045  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:15:06.289074  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:15:06.340456  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:15:06.340493  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:06.367177  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:15:06.367209  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
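Alongside the per-container logs, each cycle also pulls host-level sources: kubelet and CRI-O from journald, kernel warnings from dmesg, and an overall container status from crictl with a docker fallback. These are the same shell invocations recorded above; grouping them into one manual pass is only a convenience, not how the test runs them:

    # Host-level diagnostics, copied from the "Gathering logs for ..." lines in this cycle.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a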
	I1227 20:15:08.948368  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:15:08.960093  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:15:08.960163  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:15:09.004464  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:15:09.004531  319301 cri.go:96] found id: ""
	I1227 20:15:09.004541  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:15:09.004627  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:09.008790  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:15:09.008905  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:15:09.041635  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:15:09.041705  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:15:09.041727  319301 cri.go:96] found id: ""
	I1227 20:15:09.041750  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:15:09.041834  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:09.046563  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:09.050558  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:15:09.050679  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:15:09.079147  319301 cri.go:96] found id: ""
	I1227 20:15:09.079218  319301 logs.go:282] 0 containers: []
	W1227 20:15:09.079241  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:15:09.079265  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:15:09.079350  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:15:09.115659  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:15:09.115728  319301 cri.go:96] found id: ""
	I1227 20:15:09.115749  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:15:09.115833  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:09.119927  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:15:09.120060  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:15:09.148832  319301 cri.go:96] found id: ""
	I1227 20:15:09.148905  319301 logs.go:282] 0 containers: []
	W1227 20:15:09.148927  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:15:09.148951  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:15:09.149036  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:15:09.193967  319301 cri.go:96] found id: "d4599a49838601138827173ae16d1700bf9c506a4f9611f8f2415da1ea387070"
	I1227 20:15:09.194039  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:09.194058  319301 cri.go:96] found id: ""
	I1227 20:15:09.194083  319301 logs.go:282] 2 containers: [d4599a49838601138827173ae16d1700bf9c506a4f9611f8f2415da1ea387070 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:15:09.194168  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:09.198186  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:09.202291  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:15:09.202369  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:15:09.233220  319301 cri.go:96] found id: ""
	I1227 20:15:09.233256  319301 logs.go:282] 0 containers: []
	W1227 20:15:09.233266  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:15:09.233275  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:15:09.233286  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:15:09.265208  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:15:09.265236  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:15:09.366491  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:15:09.366527  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:15:09.385049  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:15:09.385152  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:15:09.416669  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:15:09.416697  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:15:09.477821  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:15:09.477862  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:15:09.503656  319301 logs.go:123] Gathering logs for kube-controller-manager [d4599a49838601138827173ae16d1700bf9c506a4f9611f8f2415da1ea387070] ...
	I1227 20:15:09.503682  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4599a49838601138827173ae16d1700bf9c506a4f9611f8f2415da1ea387070"
	I1227 20:15:09.529517  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:15:09.529549  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:15:09.594024  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:15:09.583997   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.584731   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.586847   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.587585   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.589403   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:15:09.583997   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.584731   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.586847   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.587585   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.589403   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:15:09.594044  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:15:09.594113  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:15:09.641021  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:15:09.641054  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:09.671469  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:15:09.671497  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:15:12.247384  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:15:12.261411  319301 out.go:203] 
	W1227 20:15:12.264240  319301 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1227 20:15:12.264279  319301 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1227 20:15:12.264291  319301 out.go:285] * Related issues:
	W1227 20:15:12.264307  319301 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1227 20:15:12.264322  319301 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1227 20:15:12.272645  319301 out.go:203] 
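	Editor's note: the K8S_APISERVER_MISSING exit above means minikube's readiness probe (the `pgrep -xnf kube-apiserver.*minikube.*` check visible in the log) never saw a running apiserver. A minimal sketch of how one might repeat that check by hand on the affected node; the commands mirror the ones minikube itself runs in this log, the profile name ha-422549 and the short container ID come from the surrounding output, and everything else is illustrative rather than part of the test:

	# open a shell on the node for this profile
	minikube ssh -p ha-422549
	# did an apiserver process ever appear? (same probe minikube uses above)
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# list all apiserver containers, including exited attempts
	sudo crictl ps -a --name kube-apiserver
	# tail the newest apiserver container's logs (IDs appear in the
	# "container status" section below)
	sudo crictl logs --tail 100 a2c772463ab69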
	
	
	==> CRI-O <==
	Dec 27 20:09:47 ha-422549 crio[668]: time="2025-12-27T20:09:47.963961573Z" level=info msg="Started container" PID=1443 containerID=810850466f08e002011f0d991e32eb0109be47db69714d6e333a070593589ffc description=kube-system/kube-controller-manager-ha-422549/kube-controller-manager id=4c2fe289-ef21-4410-b80d-903288016926 name=/runtime.v1.RuntimeService/StartContainer sandboxID=38efda04ee9aef0e7908e0db5c261b87e7e5100a62c84932b9b7ba0d61a4d0b2
	Dec 27 20:09:49 ha-422549 conmon[1210]: conmon b67722550482449b8daa <ninfo>: container 1212 exited with status 1
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.376459079Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=69065085-21ea-41c3-802a-261d89524c56 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.377242719Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1df6dc90-5ba0-4b74-852c-4cf7aefb23f0 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.378198249Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=cee7eb55-89b4-4b4e-840f-5adab55395f1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.378318031Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.390342199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.390574781Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d51da34059b2d7dc5c5989964247fd01aabd5fa31dd489fcbed003c93c5d0a79/merged/etc/passwd: no such file or directory"
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.390683445Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d51da34059b2d7dc5c5989964247fd01aabd5fa31dd489fcbed003c93c5d0a79/merged/etc/group: no such file or directory"
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.391133051Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.407049484Z" level=info msg="Created container 39052e86fac88d7cd6484a6d581397a09660e8626a668440758c42943ffc493c: kube-system/storage-provisioner/storage-provisioner" id=cee7eb55-89b4-4b4e-840f-5adab55395f1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.408066239Z" level=info msg="Starting container: 39052e86fac88d7cd6484a6d581397a09660e8626a668440758c42943ffc493c" id=a1f177fc-11ea-4dd9-a25c-b20aa52a0229 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.409701176Z" level=info msg="Started container" PID=1456 containerID=39052e86fac88d7cd6484a6d581397a09660e8626a668440758c42943ffc493c description=kube-system/storage-provisioner/storage-provisioner id=a1f177fc-11ea-4dd9-a25c-b20aa52a0229 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c0df0f45f11cf21c22800d785af6947dd7131cfe5dea11e9e2d6c844bc352c0a
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.443600032Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.447069767Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.447101142Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.44712181Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.451793967Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.451824431Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.451847585Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.455975682Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.456009075Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.456031754Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.458926316Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.45895939Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	39052e86fac88       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Running             storage-provisioner       2                   c0df0f45f11cf       storage-provisioner                 kube-system
	810850466f08e       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   5 minutes ago       Running             kube-controller-manager   5                   38efda04ee9ae       kube-controller-manager-ha-422549   kube-system
	deb6daab23cec       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf   6 minutes ago       Running             coredns                   1                   72c204b703743       coredns-7d764666f9-n5d9d            kube-system
	43a1d9657d3c8       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf   6 minutes ago       Running             coredns                   1                   270010189bb39       coredns-7d764666f9-mf5xw            kube-system
	b677225504824       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   6 minutes ago       Exited              storage-provisioner       1                   c0df0f45f11cf       storage-provisioner                 kube-system
	10122e623612b       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   6 minutes ago       Running             busybox                   1                   b045d6d9411c4       busybox-769dd8b7dd-k7ks6            default
	790f2c013c89e       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13   6 minutes ago       Running             kindnet-cni               1                   963cd2abb4546       kindnet-qkqmv                       kube-system
	0dc7fc3f72aac       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   6 minutes ago       Running             kube-proxy                1                   d7813942f329c       kube-proxy-mhmmn                    kube-system
	200f949dea5c6       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   6 minutes ago       Exited              kube-controller-manager   4                   38efda04ee9ae       kube-controller-manager-ha-422549   kube-system
	a2c772463ab69       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   6 minutes ago       Running             kube-apiserver            2                   8bfe137c6f9b3       kube-apiserver-ha-422549            kube-system
	c3f87ac29708d       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   7 minutes ago       Exited              kube-apiserver            1                   8bfe137c6f9b3       kube-apiserver-ha-422549            kube-system
	79f65bc2e1dbc       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   7 minutes ago       Running             etcd                      1                   f60298eb8266f       etcd-ha-422549                      kube-system
	dd811e752da4c       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   7 minutes ago       Running             kube-scheduler            1                   ce9729522201c       kube-scheduler-ha-422549            kube-system
	feeed30c26dbb       28c5662932f6032ee4faba083d9c2af90232797e1d4f89d9892cb92b26fec299   7 minutes ago       Running             kube-vip                  0                   1eca96f45960b       kube-vip-ha-422549                  kube-system
	
	
	==> coredns [43a1d9657d3c893603414e1fad6c7f34c4c4ed3f7f0f2369eb8490cc9ea240ec] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:47173 - 60767 "HINFO IN 8301766955164973522.8999772451794302158. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029591992s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	
	
	==> coredns [deb6daab23cece988ebd68d94f1237fabdfd9ad9729504264927da30e4c1b5a0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:35210 - 10149 "HINFO IN 5398190722329959175.7924831905691569149. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027114236s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-422549
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_03_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:03:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:15:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:11:30 +0000   Sat, 27 Dec 2025 20:03:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:11:30 +0000   Sat, 27 Dec 2025 20:03:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:11:30 +0000   Sat, 27 Dec 2025 20:03:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:11:30 +0000   Sat, 27 Dec 2025 20:09:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-422549
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                acd356f3-8732-454f-9ea5-4ebb90b80a04
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-769dd8b7dd-k7ks6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7d764666f9-mf5xw             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 coredns-7d764666f9-n5d9d             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 etcd-ha-422549                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-qkqmv                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-422549             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-422549    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-mhmmn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-422549             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-422549                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  10m    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  8m30s  node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  5m31s  node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	
	
	Name:               ha-422549-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_27T20_04_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:04:00 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:06:58 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 27 Dec 2025 20:06:47 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 27 Dec 2025 20:06:47 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 27 Dec 2025 20:06:47 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 27 Dec 2025 20:06:47 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-422549-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                279e934d-6d34-4a11-83f0-a7f36011d6a2
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-769dd8b7dd-v6vks                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-422549-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-5wczs                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-422549-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-422549-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-nqr7h                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-422549-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-422549-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  10m    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  8m30s  node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  5m31s  node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  NodeNotReady    4m41s  node-controller  Node ha-422549-m02 status is now: NodeNotReady
	
	
	Name:               ha-422549-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_27T20_04_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:04:47 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:06:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 27 Dec 2025 20:06:39 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 27 Dec 2025 20:06:39 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 27 Dec 2025 20:06:39 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 27 Dec 2025 20:06:39 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-422549-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                dd826b6d-21ec-45c4-b392-2d4b9b2daddb
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-769dd8b7dd-qcz4b                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-422549-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-28svl                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-422549-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-422549-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-cg4z5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-422549-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-422549-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  10m    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  10m    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  10m    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  8m30s  node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  5m31s  node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  NodeNotReady    4m41s  node-controller  Node ha-422549-m03 status is now: NodeNotReady
	
	
	Name:               ha-422549-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_27T20_05_33_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:05:32 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:06:44 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 27 Dec 2025 20:06:44 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 27 Dec 2025 20:06:44 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 27 Dec 2025 20:06:44 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 27 Dec 2025 20:06:44 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-422549-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                45c0e480-898e-46d5-83ce-c457d7b4b021
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4hl7v       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m48s
	  kube-system                 kube-proxy-kscg6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  9m46s  node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  9m46s  node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  9m44s  node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  8m30s  node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  5m31s  node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  NodeNotReady    4m41s  node-controller  Node ha-422549-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Dec27 19:24] overlayfs: idmapped layers are currently not supported
	[Dec27 19:25] overlayfs: idmapped layers are currently not supported
	[Dec27 19:26] overlayfs: idmapped layers are currently not supported
	[ +16.831724] overlayfs: idmapped layers are currently not supported
	[Dec27 19:27] overlayfs: idmapped layers are currently not supported
	[Dec27 19:28] overlayfs: idmapped layers are currently not supported
	[ +28.388596] overlayfs: idmapped layers are currently not supported
	[Dec27 19:29] overlayfs: idmapped layers are currently not supported
	[  +9.242530] overlayfs: idmapped layers are currently not supported
	[Dec27 19:30] overlayfs: idmapped layers are currently not supported
	[ +11.577339] overlayfs: idmapped layers are currently not supported
	[Dec27 19:32] overlayfs: idmapped layers are currently not supported
	[ +19.186532] overlayfs: idmapped layers are currently not supported
	[Dec27 19:34] overlayfs: idmapped layers are currently not supported
	[Dec27 19:54] kauditd_printk_skb: 8 callbacks suppressed
	[Dec27 19:56] overlayfs: idmapped layers are currently not supported
	[Dec27 19:59] overlayfs: idmapped layers are currently not supported
	[Dec27 20:00] overlayfs: idmapped layers are currently not supported
	[Dec27 20:03] overlayfs: idmapped layers are currently not supported
	[ +31.019083] overlayfs: idmapped layers are currently not supported
	[Dec27 20:04] overlayfs: idmapped layers are currently not supported
	[Dec27 20:05] overlayfs: idmapped layers are currently not supported
	[Dec27 20:06] overlayfs: idmapped layers are currently not supported
	[Dec27 20:07] overlayfs: idmapped layers are currently not supported
	[  +3.687478] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [79f65bc2e1dbcf7ebe07acaf2143b45f059da3390e107fc3eb87595ccc5f920d] <==
	{"level":"warn","ts":"2025-12-27T20:15:20.608374Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:20.628158Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:20.638124Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:20.641050Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:20.641557Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:20.645534Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:20.652514Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:20.660117Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:20.667686Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:20.670414Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:20.673608Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:20.684487Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:20.694605Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:20.698184Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:20.701099Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:20.705085Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:20.708435Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:20.721780Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:20.730268Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:20.735006Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:20.741506Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:20.741628Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:20.744971Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:20.752424Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:20.759404Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:15:20 up  1:57,  0 user,  load average: 0.55, 1.07, 1.34
	Linux ha-422549 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [790f2c013c89e320d6ae1872fcbeb0dcede9e548fae087919a1d710b26587af9] <==
	I1227 20:14:49.450546       1 main.go:324] Node ha-422549-m04 has CIDR [10.244.3.0/24] 
	I1227 20:14:59.445558       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 20:14:59.445661       1 main.go:301] handling current node
	I1227 20:14:59.445700       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1227 20:14:59.445735       1 main.go:324] Node ha-422549-m02 has CIDR [10.244.1.0/24] 
	I1227 20:14:59.445899       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1227 20:14:59.445935       1 main.go:324] Node ha-422549-m03 has CIDR [10.244.2.0/24] 
	I1227 20:14:59.446020       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1227 20:14:59.446055       1 main.go:324] Node ha-422549-m04 has CIDR [10.244.3.0/24] 
	I1227 20:15:09.445623       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 20:15:09.445660       1 main.go:301] handling current node
	I1227 20:15:09.445676       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1227 20:15:09.445682       1 main.go:324] Node ha-422549-m02 has CIDR [10.244.1.0/24] 
	I1227 20:15:09.445872       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1227 20:15:09.445881       1 main.go:324] Node ha-422549-m03 has CIDR [10.244.2.0/24] 
	I1227 20:15:09.446114       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1227 20:15:09.446126       1 main.go:324] Node ha-422549-m04 has CIDR [10.244.3.0/24] 
	I1227 20:15:19.450346       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1227 20:15:19.450445       1 main.go:324] Node ha-422549-m04 has CIDR [10.244.3.0/24] 
	I1227 20:15:19.450623       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 20:15:19.450662       1 main.go:301] handling current node
	I1227 20:15:19.450700       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1227 20:15:19.450749       1 main.go:324] Node ha-422549-m02 has CIDR [10.244.1.0/24] 
	I1227 20:15:19.450842       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1227 20:15:19.450875       1 main.go:324] Node ha-422549-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [a2c772463ab69455651df640481fbedb03fe6400b56096056428e79c07be9499] <==
	I1227 20:09:16.090173       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:09:16.142608       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 20:09:16.165012       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:09:16.188215       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:16.247286       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 20:09:17.588850       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 20:09:17.588862       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 20:09:17.591046       1 cache.go:39] Caches are synced for autoregister controller
	I1227 20:09:17.591196       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:17.591213       1 policy_source.go:248] refreshing policies
	I1227 20:09:17.594498       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 20:09:17.632882       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 20:09:18.590962       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 20:09:18.719267       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:09:18.730017       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1227 20:09:18.736565       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1227 20:09:18.757260       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:09:18.776199       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:09:18.793727       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 20:09:18.793809       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	W1227 20:09:18.871915       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	W1227 20:09:38.848605       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1227 20:09:50.148007       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:09:50.298023       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:10:40.117662       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-apiserver [c3f87ac29708d39b5580f953e8ccc765b36b830cf405bc7750b8afe798a15a77] <==
	{"level":"warn","ts":"2025-12-27T20:08:34.277834Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400203fc20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277853Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400144c3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277870Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400203f2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277886Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002670b40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277902Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40029112c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277921Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001cc2f00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277917Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40021472c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277951Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002a345a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277938Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002cbb2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277969Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002671c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277982Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026703c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278004Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f51c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278007Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40029fd680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278023Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400144cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278027Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002d0ef00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278040Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002cba960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278044Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002d0ef00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278056Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026714a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278062Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000ea3c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278071Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400102d2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278373Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40021472c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	F1227 20:08:39.300772       1 hooks.go:204] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	{"level":"warn","ts":"2025-12-27T20:08:39.399795Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400102d2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	E1227 20:08:39.400034       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	
	
	==> kube-controller-manager [200f949dea5c60d38a5d90e0270e6343a89f068bd2083ee55915c81023b0e023] <==
	I1227 20:08:47.677940       1 serving.go:386] Generated self-signed cert in-memory
	I1227 20:08:47.685798       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1227 20:08:47.685893       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:08:47.687365       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1227 20:08:47.687564       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1227 20:08:47.687645       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1227 20:08:47.687811       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1227 20:08:57.704670       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [810850466f08e002011f0d991e32eb0109be47db69714d6e333a070593589ffc] <==
	I1227 20:09:49.817998       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.818055       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.818125       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.818182       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.818296       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.818398       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 20:09:49.823879       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.824187       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.824238       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.824323       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.826908       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549-m04"
	I1227 20:09:49.826980       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549"
	I1227 20:09:49.827019       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549-m02"
	I1227 20:09:49.827146       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549-m03"
	I1227 20:09:49.831582       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.831626       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.831651       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.837170       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 20:09:49.903784       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.914954       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.915054       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:09:49.915069       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:10:39.887314       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-422549-m04"
	I1227 20:10:39.888758       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-422549-m04"
	I1227 20:10:40.332581       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="PartialDisruption"
	
	
	==> kube-proxy [0dc7fc3f72aac5f705d9afdbd65e7c9da34760b5dcbc880ecf6236b8d0c7a88c] <==
	I1227 20:09:19.404089       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:09:19.491223       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:09:19.592597       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:19.592728       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1227 20:09:19.592858       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:09:19.644888       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:09:19.644944       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:09:19.649692       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:09:19.649993       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:09:19.650014       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:09:19.652082       1 config.go:200] "Starting service config controller"
	I1227 20:09:19.652103       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:09:19.652121       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:09:19.652124       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:09:19.652134       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:09:19.652138       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:09:19.652805       1 config.go:309] "Starting node config controller"
	I1227 20:09:19.652821       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:09:19.652829       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:09:19.753198       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 20:09:19.753207       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:09:19.753242       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [dd811e752da4c2025246e605ecc1690aba8141353e20fb91cdad4468a1c059f9] <==
	E1227 20:08:19.506524       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 20:08:19.569107       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 20:08:20.320229       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 20:08:20.376812       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 20:08:21.129930       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 20:08:39.022443       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 20:08:43.570864       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 20:08:47.134070       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 20:08:48.738392       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 20:08:49.986460       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 20:08:49.987992       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 20:08:50.727843       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 20:08:50.956450       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 20:08:51.960069       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 20:08:53.165271       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 20:08:57.344100       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 20:08:59.543840       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 20:09:01.253158       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 20:09:01.270041       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 20:09:01.345742       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 20:09:01.466100       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 20:09:02.611833       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 20:09:09.548910       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 20:09:10.555054       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	I1227 20:09:56.031915       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:10:29 ha-422549 kubelet[804]: I1227 20:10:29.927768     804 kubelet.go:3323] "Trying to delete pod" pod="kube-system/kube-vip-ha-422549" podUID="27494a9a-1459-4c40-99d3-c3e21df433ef"
	Dec 27 20:10:29 ha-422549 kubelet[804]: I1227 20:10:29.944622     804 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-422549"
	Dec 27 20:10:29 ha-422549 kubelet[804]: I1227 20:10:29.944659     804 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-422549"
	Dec 27 20:11:02 ha-422549 kubelet[804]: E1227 20:11:02.926814     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	Dec 27 20:11:12 ha-422549 kubelet[804]: E1227 20:11:12.927477     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	Dec 27 20:11:13 ha-422549 kubelet[804]: E1227 20:11:13.926597     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-ha-422549" containerName="kube-controller-manager"
	Dec 27 20:11:14 ha-422549 kubelet[804]: E1227 20:11:14.926505     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-ha-422549" containerName="kube-scheduler"
	Dec 27 20:11:33 ha-422549 kubelet[804]: E1227 20:11:33.928211     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-ha-422549" containerName="kube-apiserver"
	Dec 27 20:11:45 ha-422549 kubelet[804]: E1227 20:11:45.927376     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-ha-422549" containerName="etcd"
	Dec 27 20:12:25 ha-422549 kubelet[804]: E1227 20:12:25.926700     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	Dec 27 20:12:39 ha-422549 kubelet[804]: E1227 20:12:39.927819     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-ha-422549" containerName="kube-controller-manager"
	Dec 27 20:12:41 ha-422549 kubelet[804]: E1227 20:12:41.928937     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	Dec 27 20:12:44 ha-422549 kubelet[804]: E1227 20:12:44.927340     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-ha-422549" containerName="kube-scheduler"
	Dec 27 20:12:52 ha-422549 kubelet[804]: E1227 20:12:52.926348     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-ha-422549" containerName="kube-apiserver"
	Dec 27 20:13:04 ha-422549 kubelet[804]: E1227 20:13:04.927081     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-ha-422549" containerName="etcd"
	Dec 27 20:13:35 ha-422549 kubelet[804]: E1227 20:13:35.927017     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	Dec 27 20:13:53 ha-422549 kubelet[804]: E1227 20:13:53.926931     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	Dec 27 20:14:05 ha-422549 kubelet[804]: E1227 20:14:05.927026     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-ha-422549" containerName="kube-apiserver"
	Dec 27 20:14:09 ha-422549 kubelet[804]: E1227 20:14:09.926884     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-ha-422549" containerName="kube-controller-manager"
	Dec 27 20:14:11 ha-422549 kubelet[804]: E1227 20:14:11.927165     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-ha-422549" containerName="kube-scheduler"
	Dec 27 20:14:21 ha-422549 kubelet[804]: E1227 20:14:21.927398     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-ha-422549" containerName="etcd"
	Dec 27 20:14:55 ha-422549 kubelet[804]: E1227 20:14:55.927938     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	Dec 27 20:15:04 ha-422549 kubelet[804]: E1227 20:15:04.926424     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	Dec 27 20:15:16 ha-422549 kubelet[804]: E1227 20:15:16.927222     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-ha-422549" containerName="kube-scheduler"
	Dec 27 20:15:20 ha-422549 kubelet[804]: E1227 20:15:20.926597     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-ha-422549" containerName="kube-apiserver"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-422549 -n ha-422549
helpers_test.go:270: (dbg) Run:  kubectl --context ha-422549 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (5.10s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (4.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-422549" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-422549\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-422549\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.35.0\",\"ClusterName\":\"ha-422549\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.35.0\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.35.0\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.49.4\",\"Port\":8443,\"KubernetesVersion\":\"v1.35.0\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.35.0\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvid
ia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizat
ions\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000,\"Rosetta\":false},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-422549
helpers_test.go:244: (dbg) docker inspect ha-422549:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf",
	        "Created": "2025-12-27T20:03:01.682141141Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 319429,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:07:23.280905445Z",
	            "FinishedAt": "2025-12-27T20:07:22.683216546Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/hostname",
	        "HostsPath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/hosts",
	        "LogPath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf-json.log",
	        "Name": "/ha-422549",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-422549:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-422549",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf",
	                "LowerDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064/merged",
	                "UpperDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064/diff",
	                "WorkDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-422549",
	                "Source": "/var/lib/docker/volumes/ha-422549/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422549",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422549",
	                "name.minikube.sigs.k8s.io": "ha-422549",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "28e77342f2c4751026f399b040de05177304716ac6aab83b39b3d9c47cebffe7",
	            "SandboxKey": "/var/run/docker/netns/28e77342f2c4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33177"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422549": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:36:09:aa:37:bf",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9521cb9225c5842f69a8435c5cf5485b75f9a8b2c68158742ff27c2be32f5951",
	                    "EndpointID": "a460c21f8bbd3e3cd9f593131304327baa8422b2d75f0ce1ac3c5c098867a970",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422549",
	                        "53fd780c3df5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-422549 -n ha-422549
helpers_test.go:253: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p ha-422549 logs -n 25: (2.133685379s)
helpers_test.go:261: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-422549 ssh -n ha-422549-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m02 sudo cat /home/docker/cp-test_ha-422549-m03_ha-422549-m02.txt                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m03:/home/docker/cp-test.txt ha-422549-m04:/home/docker/cp-test_ha-422549-m03_ha-422549-m04.txt               │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test_ha-422549-m03_ha-422549-m04.txt                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp testdata/cp-test.txt ha-422549-m04:/home/docker/cp-test.txt                                                             │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3848759327/001/cp-test_ha-422549-m04.txt │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt ha-422549:/home/docker/cp-test_ha-422549-m04_ha-422549.txt                       │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549 sudo cat /home/docker/cp-test_ha-422549-m04_ha-422549.txt                                                 │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt ha-422549-m02:/home/docker/cp-test_ha-422549-m04_ha-422549-m02.txt               │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m02 sudo cat /home/docker/cp-test_ha-422549-m04_ha-422549-m02.txt                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt ha-422549-m03:/home/docker/cp-test_ha-422549-m04_ha-422549-m03.txt               │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m03 sudo cat /home/docker/cp-test_ha-422549-m04_ha-422549-m03.txt                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ node    │ ha-422549 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ node    │ ha-422549 node start m02 --alsologtostderr -v 5                                                                                      │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ node    │ ha-422549 node list --alsologtostderr -v 5                                                                                           │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │                     │
	│ stop    │ ha-422549 stop --alsologtostderr -v 5                                                                                                │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:07 UTC │
	│ start   │ ha-422549 start --wait true --alsologtostderr -v 5                                                                                   │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:07 UTC │                     │
	│ node    │ ha-422549 node list --alsologtostderr -v 5                                                                                           │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:15 UTC │                     │
	│ node    │ ha-422549 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:15 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:07:23
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:07:23.018829  319301 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:07:23.019045  319301 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:07:23.019069  319301 out.go:374] Setting ErrFile to fd 2...
	I1227 20:07:23.019104  319301 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:07:23.019417  319301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:07:23.019931  319301 out.go:368] Setting JSON to false
	I1227 20:07:23.020994  319301 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6595,"bootTime":1766859448,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:07:23.021172  319301 start.go:143] virtualization:  
	I1227 20:07:23.026478  319301 out.go:179] * [ha-422549] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:07:23.029624  319301 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:07:23.029657  319301 notify.go:221] Checking for updates...
	I1227 20:07:23.035732  319301 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:07:23.038626  319301 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:07:23.041521  319301 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:07:23.044303  319301 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:07:23.047245  319301 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:07:23.050815  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:07:23.050954  319301 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:07:23.074861  319301 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:07:23.074978  319301 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:07:23.134894  319301 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 20:07:23.1261821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:07:23.135004  319301 docker.go:319] overlay module found
	I1227 20:07:23.138113  319301 out.go:179] * Using the docker driver based on existing profile
	I1227 20:07:23.140925  319301 start.go:309] selected driver: docker
	I1227 20:07:23.140943  319301 start.go:928] validating driver "docker" against &{Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:07:23.141082  319301 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:07:23.141181  319301 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:07:23.197269  319301 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 20:07:23.188068839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:07:23.197711  319301 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:07:23.197745  319301 cni.go:84] Creating CNI manager for ""
	I1227 20:07:23.197797  319301 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1227 20:07:23.197857  319301 start.go:353] cluster config:
	{Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:07:23.202906  319301 out.go:179] * Starting "ha-422549" primary control-plane node in "ha-422549" cluster
	I1227 20:07:23.205659  319301 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:07:23.208577  319301 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:07:23.211352  319301 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:07:23.211401  319301 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:07:23.211416  319301 cache.go:65] Caching tarball of preloaded images
	I1227 20:07:23.211429  319301 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:07:23.211499  319301 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:07:23.211509  319301 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:07:23.211655  319301 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:07:23.229712  319301 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:07:23.229734  319301 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:07:23.229749  319301 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:07:23.229779  319301 start.go:360] acquireMachinesLock for ha-422549: {Name:mk939e8ee4c2bedc86cc6a99d76298e7b2a26ce2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:07:23.229835  319301 start.go:364] duration metric: took 35.657µs to acquireMachinesLock for "ha-422549"
	I1227 20:07:23.229869  319301 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:07:23.229878  319301 fix.go:54] fixHost starting: 
	I1227 20:07:23.230138  319301 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:07:23.246992  319301 fix.go:112] recreateIfNeeded on ha-422549: state=Stopped err=<nil>
	W1227 20:07:23.247025  319301 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:07:23.250226  319301 out.go:252] * Restarting existing docker container for "ha-422549" ...
	I1227 20:07:23.250324  319301 cli_runner.go:164] Run: docker start ha-422549
	I1227 20:07:23.503347  319301 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:07:23.526447  319301 kic.go:430] container "ha-422549" state is running.
	I1227 20:07:23.526916  319301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:07:23.555271  319301 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:07:23.555509  319301 machine.go:94] provisionDockerMachine start ...
	I1227 20:07:23.555569  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:23.577158  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:23.577524  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1227 20:07:23.577542  319301 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:07:23.578121  319301 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44738->127.0.0.1:33173: read: connection reset by peer
	I1227 20:07:26.720977  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549
	
	I1227 20:07:26.721006  319301 ubuntu.go:182] provisioning hostname "ha-422549"
	I1227 20:07:26.721067  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:26.738818  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:26.739131  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1227 20:07:26.739148  319301 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-422549 && echo "ha-422549" | sudo tee /etc/hostname
	I1227 20:07:26.886109  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549
	
	I1227 20:07:26.886195  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:26.903863  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:26.904173  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1227 20:07:26.904194  319301 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422549' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422549/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422549' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:07:27.041724  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:07:27.041750  319301 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:07:27.041786  319301 ubuntu.go:190] setting up certificates
	I1227 20:07:27.041803  319301 provision.go:84] configureAuth start
	I1227 20:07:27.041869  319301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:07:27.060364  319301 provision.go:143] copyHostCerts
	I1227 20:07:27.060422  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:07:27.060455  319301 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:07:27.060473  319301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:07:27.060550  319301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:07:27.060645  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:07:27.060668  319301 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:07:27.060679  319301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:07:27.060709  319301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:07:27.060761  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:07:27.060783  319301 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:07:27.060791  319301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:07:27.060818  319301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:07:27.060870  319301 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.ha-422549 san=[127.0.0.1 192.168.49.2 ha-422549 localhost minikube]
	I1227 20:07:27.239677  319301 provision.go:177] copyRemoteCerts
	I1227 20:07:27.239745  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:07:27.239800  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:27.259369  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:07:27.364829  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:07:27.364890  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:07:27.382288  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:07:27.382362  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1227 20:07:27.399154  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:07:27.399213  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:07:27.417099  319301 provision.go:87] duration metric: took 375.277706ms to configureAuth
	I1227 20:07:27.417133  319301 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:07:27.417387  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:07:27.417527  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:27.434441  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:27.434764  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1227 20:07:27.434789  319301 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:07:27.806912  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:07:27.806938  319301 machine.go:97] duration metric: took 4.251419469s to provisionDockerMachine
	I1227 20:07:27.806950  319301 start.go:293] postStartSetup for "ha-422549" (driver="docker")
	I1227 20:07:27.806961  319301 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:07:27.807018  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:07:27.807063  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:27.827185  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:07:27.924757  319301 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:07:27.927910  319301 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:07:27.927939  319301 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:07:27.927951  319301 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:07:27.928034  319301 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:07:27.928163  319301 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:07:27.928176  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:07:27.928319  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:07:27.935125  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:07:27.951297  319301 start.go:296] duration metric: took 144.328969ms for postStartSetup
	I1227 20:07:27.951425  319301 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:07:27.951489  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:27.968679  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:07:28.062963  319301 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:07:28.068245  319301 fix.go:56] duration metric: took 4.838360246s for fixHost
	I1227 20:07:28.068273  319301 start.go:83] releasing machines lock for "ha-422549", held for 4.838415218s
	I1227 20:07:28.068391  319301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:07:28.086189  319301 ssh_runner.go:195] Run: cat /version.json
	I1227 20:07:28.086242  319301 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:07:28.086251  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:28.086297  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:07:28.112515  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:07:28.119040  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:07:28.213229  319301 ssh_runner.go:195] Run: systemctl --version
	I1227 20:07:28.307265  319301 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:07:28.344982  319301 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:07:28.349307  319301 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:07:28.349416  319301 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:07:28.357039  319301 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:07:28.357061  319301 start.go:496] detecting cgroup driver to use...
	I1227 20:07:28.357091  319301 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:07:28.357187  319301 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:07:28.372341  319301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:07:28.385115  319301 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:07:28.385188  319301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:07:28.400803  319301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:07:28.413692  319301 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:07:28.520682  319301 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:07:28.638372  319301 docker.go:234] disabling docker service ...
	I1227 20:07:28.638476  319301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:07:28.652726  319301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:07:28.665221  319301 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:07:28.769753  319301 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:07:28.887106  319301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:07:28.901250  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:07:28.915594  319301 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:07:28.915656  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.923915  319301 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:07:28.924023  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.932251  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.940443  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.948974  319301 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:07:28.956576  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.964831  319301 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.973077  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:28.981210  319301 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:07:28.988289  319301 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:07:28.995419  319301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:07:29.102806  319301 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:07:29.272446  319301 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:07:29.272527  319301 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:07:29.276338  319301 start.go:574] Will wait 60s for crictl version
	I1227 20:07:29.276409  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:07:29.279905  319301 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:07:29.303871  319301 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:07:29.303984  319301 ssh_runner.go:195] Run: crio --version
	I1227 20:07:29.330697  319301 ssh_runner.go:195] Run: crio --version
	I1227 20:07:29.362339  319301 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:07:29.365125  319301 cli_runner.go:164] Run: docker network inspect ha-422549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:07:29.381233  319301 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 20:07:29.385291  319301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:07:29.396534  319301 kubeadm.go:884] updating cluster {Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:07:29.396713  319301 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:07:29.396766  319301 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:07:29.430374  319301 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:07:29.430399  319301 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:07:29.430457  319301 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:07:29.459783  319301 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:07:29.459805  319301 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:07:29.459813  319301 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I1227 20:07:29.459907  319301 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422549 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:07:29.459984  319301 ssh_runner.go:195] Run: crio config
	I1227 20:07:29.529648  319301 cni.go:84] Creating CNI manager for ""
	I1227 20:07:29.529684  319301 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1227 20:07:29.529702  319301 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:07:29.529745  319301 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422549 NodeName:ha-422549 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:07:29.529880  319301 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422549"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:07:29.529906  319301 kube-vip.go:115] generating kube-vip config ...
	I1227 20:07:29.529981  319301 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 20:07:29.541823  319301 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:07:29.541926  319301 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1227 20:07:29.541995  319301 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:07:29.549349  319301 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:07:29.549419  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1227 20:07:29.556490  319301 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1227 20:07:29.568355  319301 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:07:29.580790  319301 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1227 20:07:29.593175  319301 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 20:07:29.606173  319301 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 20:07:29.609837  319301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:07:29.619217  319301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:07:29.735123  319301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:07:29.750389  319301 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549 for IP: 192.168.49.2
	I1227 20:07:29.750412  319301 certs.go:195] generating shared ca certs ...
	I1227 20:07:29.750427  319301 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:07:29.750619  319301 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:07:29.750682  319301 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:07:29.750699  319301 certs.go:257] generating profile certs ...
	I1227 20:07:29.750812  319301 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key
	I1227 20:07:29.751056  319301 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.743f7ef3
	I1227 20:07:29.751077  319301 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt.743f7ef3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1227 20:07:30.216987  319301 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt.743f7ef3 ...
	I1227 20:07:30.217024  319301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt.743f7ef3: {Name:mk5110c0017b8f4cda34fa079f107b622b8f9c47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:07:30.217226  319301 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.743f7ef3 ...
	I1227 20:07:30.217243  319301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.743f7ef3: {Name:mkb171a8982d80a151baacbc9fe03fa941196fd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:07:30.217342  319301 certs.go:382] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt.743f7ef3 -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt
	I1227 20:07:30.217509  319301 certs.go:386] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.743f7ef3 -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key
	I1227 20:07:30.217676  319301 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key
	I1227 20:07:30.217696  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:07:30.217721  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:07:30.217741  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:07:30.217759  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:07:30.217776  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:07:30.217799  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:07:30.217821  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:07:30.217837  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:07:30.217893  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:07:30.217940  319301 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:07:30.217953  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:07:30.217981  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:07:30.218009  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:07:30.218040  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:07:30.218095  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:07:30.218156  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:07:30.218174  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:07:30.218188  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:07:30.218745  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:07:30.239060  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:07:30.258056  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:07:30.279983  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:07:30.299163  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:07:30.317066  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:07:30.333792  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:07:30.363380  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:07:30.383880  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:07:30.402563  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:07:30.424158  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:07:30.441364  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:07:30.455028  319301 ssh_runner.go:195] Run: openssl version
	I1227 20:07:30.462193  319301 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:07:30.476783  319301 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:07:30.488736  319301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:07:30.492787  319301 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:07:30.492869  319301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:07:30.601338  319301 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:07:30.618710  319301 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:07:30.629367  319301 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:07:30.641908  319301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:07:30.646861  319301 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:07:30.646946  319301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:07:30.713797  319301 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:07:30.723031  319301 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:07:30.735659  319301 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:07:30.746061  319301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:07:30.750487  319301 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:07:30.750578  319301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:07:30.818577  319301 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:07:30.827800  319301 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:07:30.835007  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:07:30.906833  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:07:30.969599  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:07:31.044468  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:07:31.106453  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:07:31.155733  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 20:07:31.197366  319301 kubeadm.go:401] StartCluster: {Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:07:31.197537  319301 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:07:31.197613  319301 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:07:31.226634  319301 cri.go:96] found id: "c3f87ac29708d39b5580f953e8ccc765b36b830cf405bc7750b8afe798a15a77"
	I1227 20:07:31.226665  319301 cri.go:96] found id: "79f65bc2e1dbcf7ebe07acaf2143b45f059da3390e107fc3eb87595ccc5f920d"
	I1227 20:07:31.226671  319301 cri.go:96] found id: "dd811e752da4c2025246e605ecc1690aba8141353e20fb91cdad4468a1c059f9"
	I1227 20:07:31.226675  319301 cri.go:96] found id: "feeed30c26dbbb06391e6c43a6d6041af28ce218eaf23eec819dc38cda9444e8"
	I1227 20:07:31.226679  319301 cri.go:96] found id: "bbf24a80fc638071d98a1cc08ab823b436cc206cb456eac7a8be7958d11889db"
	I1227 20:07:31.226683  319301 cri.go:96] found id: ""
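The crictl invocation above enumerates every kube-system container (running or exited) by its pod-namespace label; the trailing empty id simply marks the end of the list. A small Go sketch of the same listing, shelling out to crictl with exactly the flags shown in the log (the function name and error handling are mine, not minikube's):

    // listKubeSystemContainers lists all container IDs whose pod belongs to the
    // kube-system namespace, using the crictl flags recorded in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func listKubeSystemContainers() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "--timeout=10s", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := listKubeSystemContainers()
        fmt.Println(ids, err)
    }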
	I1227 20:07:31.226745  319301 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:07:31.244824  319301 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:07:31Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:07:31.244903  319301 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:07:31.257811  319301 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:07:31.257842  319301 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:07:31.257908  319301 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:07:31.270645  319301 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:07:31.271073  319301 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-422549" does not appear in /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:07:31.271185  319301 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-272475/kubeconfig needs updating (will repair): [kubeconfig missing "ha-422549" cluster setting kubeconfig missing "ha-422549" context setting]
	I1227 20:07:31.271518  319301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:07:31.272112  319301 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 20:07:31.272794  319301 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1227 20:07:31.272816  319301 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1227 20:07:31.272823  319301 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1227 20:07:31.272851  319301 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1227 20:07:31.272828  319301 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1227 20:07:31.272895  319301 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1227 20:07:31.272900  319301 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1227 20:07:31.273215  319301 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:07:31.284048  319301 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1227 20:07:31.284081  319301 kubeadm.go:602] duration metric: took 26.232251ms to restartPrimaryControlPlane
	I1227 20:07:31.284090  319301 kubeadm.go:403] duration metric: took 86.73489ms to StartCluster
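The diff probe above is what decides "does not require reconfiguration": diff exits 0 when the freshly generated kubeadm.yaml matches the copy already on the node, and 1 when the two differ. A hedged Go sketch of that decision, not minikube's actual code (paths and the helper name are illustrative):

    // needsReconfiguration runs `diff -u current generated` and interprets the
    // exit status: 0 means identical (keep the running configuration), 1 means
    // the files differ (the control plane must be reconfigured).
    package main

    import (
        "fmt"
        "os/exec"
    )

    func needsReconfiguration(current, generated string) (bool, error) {
        err := exec.Command("diff", "-u", current, generated).Run()
        if err == nil {
            return false, nil // identical
        }
        if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
            return true, nil // files differ
        }
        return false, err // exit code 2 or exec failure: missing file, etc.
    }

    func main() {
        changed, err := needsReconfiguration("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(changed, err)
    }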
	I1227 20:07:31.284107  319301 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:07:31.284175  319301 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:07:31.284780  319301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:07:31.284997  319301 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:07:31.285023  319301 start.go:242] waiting for startup goroutines ...
	I1227 20:07:31.285032  319301 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:07:31.285574  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:07:31.290925  319301 out.go:179] * Enabled addons: 
	I1227 20:07:31.294082  319301 addons.go:530] duration metric: took 9.037764ms for enable addons: enabled=[]
	I1227 20:07:31.294137  319301 start.go:247] waiting for cluster config update ...
	I1227 20:07:31.294152  319301 start.go:256] writing updated cluster config ...
	I1227 20:07:31.297568  319301 out.go:203] 
	I1227 20:07:31.300820  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:07:31.300937  319301 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:07:31.304320  319301 out.go:179] * Starting "ha-422549-m02" control-plane node in "ha-422549" cluster
	I1227 20:07:31.306983  319301 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:07:31.309971  319301 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:07:31.312773  319301 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:07:31.312796  319301 cache.go:65] Caching tarball of preloaded images
	I1227 20:07:31.312889  319301 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:07:31.312906  319301 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:07:31.313029  319301 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:07:31.313257  319301 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:07:31.349637  319301 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:07:31.349662  319301 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:07:31.349676  319301 cache.go:243] Successfully downloaded all kic artifacts
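The kic base image check above skips the pull because the image digest is already present in the local Docker daemon. A rough equivalent using the docker CLI rather than minikube's own image code (this is an illustration of the idea, not what image.go actually calls):

    // imageInDaemon treats a zero exit from `docker image inspect <ref>` as
    // "image already present locally", so a pull can be skipped.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func imageInDaemon(ref string) bool {
        // Only the exit status matters here; the inspect output is discarded.
        return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
        fmt.Println(imageInDaemon("gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316"))
    }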
	I1227 20:07:31.349708  319301 start.go:360] acquireMachinesLock for ha-422549-m02: {Name:mk8fc7aa5d6c41749cc4b9db094e3fd243d8b868 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:07:31.349765  319301 start.go:364] duration metric: took 37.299µs to acquireMachinesLock for "ha-422549-m02"
	I1227 20:07:31.349791  319301 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:07:31.349796  319301 fix.go:54] fixHost starting: m02
	I1227 20:07:31.350055  319301 cli_runner.go:164] Run: docker container inspect ha-422549-m02 --format={{.State.Status}}
	I1227 20:07:31.391676  319301 fix.go:112] recreateIfNeeded on ha-422549-m02: state=Stopped err=<nil>
	W1227 20:07:31.391706  319301 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:07:31.394953  319301 out.go:252] * Restarting existing docker container for "ha-422549-m02" ...
	I1227 20:07:31.395043  319301 cli_runner.go:164] Run: docker start ha-422549-m02
	I1227 20:07:31.777922  319301 cli_runner.go:164] Run: docker container inspect ha-422549-m02 --format={{.State.Status}}
	I1227 20:07:31.805184  319301 kic.go:430] container "ha-422549-m02" state is running.
	I1227 20:07:31.805591  319301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m02
	I1227 20:07:31.841697  319301 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:07:31.841951  319301 machine.go:94] provisionDockerMachine start ...
	I1227 20:07:31.842022  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:31.865663  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:31.865982  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1227 20:07:31.865998  319301 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:07:31.866584  319301 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58412->127.0.0.1:33178: read: connection reset by peer
	I1227 20:07:35.045099  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m02
	
	I1227 20:07:35.045161  319301 ubuntu.go:182] provisioning hostname "ha-422549-m02"
	I1227 20:07:35.045260  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:35.074417  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:35.074732  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1227 20:07:35.074750  319301 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-422549-m02 && echo "ha-422549-m02" | sudo tee /etc/hostname
	I1227 20:07:35.272951  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m02
	
	I1227 20:07:35.273095  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:35.310855  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:35.311167  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1227 20:07:35.311187  319301 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422549-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422549-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422549-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:07:35.489398  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:07:35.489483  319301 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:07:35.489515  319301 ubuntu.go:190] setting up certificates
	I1227 20:07:35.489552  319301 provision.go:84] configureAuth start
	I1227 20:07:35.489651  319301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m02
	I1227 20:07:35.519140  319301 provision.go:143] copyHostCerts
	I1227 20:07:35.519180  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:07:35.519212  319301 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:07:35.519219  319301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:07:35.519305  319301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:07:35.519384  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:07:35.519400  319301 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:07:35.519405  319301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:07:35.519428  319301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:07:35.519467  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:07:35.519482  319301 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:07:35.519486  319301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:07:35.519508  319301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:07:35.519552  319301 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.ha-422549-m02 san=[127.0.0.1 192.168.49.3 ha-422549-m02 localhost minikube]
	I1227 20:07:35.673804  319301 provision.go:177] copyRemoteCerts
	I1227 20:07:35.676274  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:07:35.676362  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:35.700203  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:07:35.810686  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:07:35.810802  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 20:07:35.827198  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:07:35.827254  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:07:35.847940  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:07:35.848040  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:07:35.870095  319301 provision.go:87] duration metric: took 380.509887ms to configureAuth
	I1227 20:07:35.870124  319301 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:07:35.870422  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:07:35.870563  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:35.893611  319301 main.go:144] libmachine: Using SSH client type: native
	I1227 20:07:35.893918  319301 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1227 20:07:35.893932  319301 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:07:36.282435  319301 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:07:36.282459  319301 machine.go:97] duration metric: took 4.440490595s to provisionDockerMachine
	I1227 20:07:36.282470  319301 start.go:293] postStartSetup for "ha-422549-m02" (driver="docker")
	I1227 20:07:36.282505  319301 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:07:36.282595  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:07:36.282666  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:36.301003  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:07:36.402628  319301 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:07:36.406068  319301 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:07:36.406097  319301 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:07:36.406108  319301 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:07:36.406247  319301 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:07:36.406355  319301 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:07:36.406371  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:07:36.406502  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:07:36.414126  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:07:36.431291  319301 start.go:296] duration metric: took 148.805898ms for postStartSetup
	I1227 20:07:36.431373  319301 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:07:36.431417  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:36.449358  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:07:36.546713  319301 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:07:36.551629  319301 fix.go:56] duration metric: took 5.201823785s for fixHost
	I1227 20:07:36.551655  319301 start.go:83] releasing machines lock for "ha-422549-m02", held for 5.20187627s
	I1227 20:07:36.551729  319301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m02
	I1227 20:07:36.571695  319301 out.go:179] * Found network options:
	I1227 20:07:36.574736  319301 out.go:179]   - NO_PROXY=192.168.49.2
	W1227 20:07:36.577654  319301 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:07:36.577694  319301 proxy.go:120] fail to check proxy env: Error ip not in block
	I1227 20:07:36.577781  319301 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:07:36.577827  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:36.578074  319301 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:07:36.578134  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:07:36.598248  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:07:36.598898  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:07:36.873888  319301 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:07:36.879823  319301 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:07:36.879937  319301 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:07:36.899888  319301 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:07:36.899953  319301 start.go:496] detecting cgroup driver to use...
	I1227 20:07:36.899997  319301 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:07:36.900076  319301 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:07:36.928970  319301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:07:36.947727  319301 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:07:36.947845  319301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:07:36.967863  319301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:07:36.998332  319301 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:07:37.167619  319301 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:07:37.326628  319301 docker.go:234] disabling docker service ...
	I1227 20:07:37.326748  319301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:07:37.341981  319301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:07:37.354777  319301 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:07:37.613409  319301 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:07:37.870750  319301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:07:37.886152  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:07:37.906254  319301 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:07:37.906377  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.926031  319301 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:07:37.926143  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.937485  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.946425  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.958890  319301 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:07:37.968858  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.978269  319301 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:07:37.986277  319301 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
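Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (a reconstruction from the commands in the log, ordering aside; not a dump of the real file):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]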
	I1227 20:07:37.995011  319301 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:07:38.002468  319301 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:07:38.010027  319301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:07:38.207437  319301 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:09:08.647737  319301 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.440260784s)
	I1227 20:09:08.647767  319301 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:09:08.647821  319301 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:09:08.651981  319301 start.go:574] Will wait 60s for crictl version
	I1227 20:09:08.652048  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:09:08.655690  319301 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:09:08.681479  319301 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:09:08.681565  319301 ssh_runner.go:195] Run: crio --version
	I1227 20:09:08.713332  319301 ssh_runner.go:195] Run: crio --version
	I1227 20:09:08.746336  319301 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:09:08.749205  319301 out.go:179]   - env NO_PROXY=192.168.49.2
	I1227 20:09:08.752182  319301 cli_runner.go:164] Run: docker network inspect ha-422549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:09:08.768090  319301 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 20:09:08.771937  319301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
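The one-liner above makes the host.minikube.internal mapping idempotent: any existing /etc/hosts line ending in a tab plus that name is dropped, then a single fresh entry is appended. A small Go sketch of the same replace-then-append pattern (the path in main, the file mode, and the function name are assumptions for illustration):

    // ensureHostsEntry drops any existing line for name and appends "ip<TAB>name",
    // so repeated runs leave exactly one entry, like the shell pipeline in the log.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        // Trim trailing empty elements so blank lines do not accumulate.
        for len(kept) > 0 && kept[len(kept)-1] == "" {
            kept = kept[:len(kept)-1]
        }
        kept = append(kept, ip+"\t"+name, "")
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")), 0644)
    }

    func main() {
        fmt.Println(ensureHostsEntry("/tmp/hosts", "192.168.49.1", "host.minikube.internal"))
    }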
	I1227 20:09:08.781622  319301 mustload.go:66] Loading cluster: ha-422549
	I1227 20:09:08.781869  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:09:08.782144  319301 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:09:08.798634  319301 host.go:66] Checking if "ha-422549" exists ...
	I1227 20:09:08.798913  319301 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549 for IP: 192.168.49.3
	I1227 20:09:08.798926  319301 certs.go:195] generating shared ca certs ...
	I1227 20:09:08.798941  319301 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:09:08.799067  319301 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:09:08.799116  319301 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:09:08.799129  319301 certs.go:257] generating profile certs ...
	I1227 20:09:08.799210  319301 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key
	I1227 20:09:08.799280  319301 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.982843aa
	I1227 20:09:08.799324  319301 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key
	I1227 20:09:08.799337  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:09:08.799350  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:09:08.799367  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:09:08.799386  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:09:08.799406  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:09:08.799422  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:09:08.799438  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:09:08.799453  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:09:08.799510  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:09:08.799546  319301 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:09:08.799559  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:09:08.799588  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:09:08.799617  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:09:08.799646  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:09:08.799694  319301 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:09:08.799727  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:09:08.799744  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:09:08.799758  319301 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:09:08.799822  319301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:09:08.817939  319301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:09:08.909783  319301 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1227 20:09:08.913788  319301 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1227 20:09:08.922116  319301 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1227 20:09:08.925553  319301 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1227 20:09:08.933735  319301 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1227 20:09:08.937584  319301 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1227 20:09:08.946742  319301 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1227 20:09:08.951033  319301 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1227 20:09:08.959969  319301 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1227 20:09:08.963648  319301 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1227 20:09:08.971803  319301 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1227 20:09:08.975349  319301 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1227 20:09:08.983445  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:09:09.001559  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:09:09.020775  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:09:09.041958  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:09:09.059931  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:09:09.076796  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:09:09.095447  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:09:09.113037  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:09:09.130903  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:09:09.148555  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:09:09.167075  319301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:09:09.184251  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1227 20:09:09.197053  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1227 20:09:09.209869  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1227 20:09:09.223329  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1227 20:09:09.236109  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1227 20:09:09.249524  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1227 20:09:09.262558  319301 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (728 bytes)
	I1227 20:09:09.278766  319301 ssh_runner.go:195] Run: openssl version
	I1227 20:09:09.288173  319301 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:09:09.303263  319301 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:09:09.312839  319301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:09:09.317343  319301 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:09:09.317435  319301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:09:09.358946  319301 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:09:09.366603  319301 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:09:09.374144  319301 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:09:09.381566  319301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:09:09.385396  319301 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:09:09.385483  319301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:09:09.427186  319301 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:09:09.435033  319301 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:09:09.442740  319301 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:09:09.450736  319301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:09:09.455313  319301 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:09:09.455406  319301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:09:09.506456  319301 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:09:09.515191  319301 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:09:09.519143  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:09:09.560830  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:09:09.601733  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:09:09.642802  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:09:09.683557  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:09:09.724343  319301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 20:09:09.764937  319301 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.35.0 crio true true} ...
	I1227 20:09:09.765044  319301 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422549-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:09:09.765076  319301 kube-vip.go:115] generating kube-vip config ...
	I1227 20:09:09.765126  319301 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 20:09:09.777907  319301 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:09:09.778008  319301 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1227 20:09:09.778101  319301 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:09:09.785542  319301 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:09:09.785669  319301 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1227 20:09:09.793814  319301 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 20:09:09.808509  319301 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:09:09.822210  319301 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 20:09:09.836025  319301 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 20:09:09.840416  319301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:09:09.851735  319301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:09:09.987416  319301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:09:10.000958  319301 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:09:10.001514  319301 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:09:10.006801  319301 out.go:179] * Verifying Kubernetes components...
	I1227 20:09:10.009655  319301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:09:10.156826  319301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:09:10.171179  319301 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1227 20:09:10.171261  319301 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1227 20:09:10.171542  319301 node_ready.go:35] waiting up to 6m0s for node "ha-422549-m02" to be "Ready" ...
	I1227 20:09:13.107692  319301 node_ready.go:49] node "ha-422549-m02" is "Ready"
	I1227 20:09:13.107720  319301 node_ready.go:38] duration metric: took 2.936159281s for node "ha-422549-m02" to be "Ready" ...
	I1227 20:09:13.107734  319301 api_server.go:52] waiting for apiserver process to appear ...
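The long run of pgrep commands that follows is a simple poll: roughly every 500ms (the spacing visible in the timestamps) the runner checks whether a kube-apiserver process matching the pattern exists yet. A minimal Go sketch of such a loop; the timeout value used here is an assumption, since it is not shown in this part of the log:

    // waitForAPIServer polls for a kube-apiserver process until it appears or the
    // deadline passes, using the same pgrep probe as the log lines below.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // Match the full kube-apiserver command line, newest process only.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
        fmt.Println(waitForAPIServer(6 * time.Minute)) // timeout chosen for illustration
    }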
	I1227 20:09:13.107789  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:13.607926  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:14.107987  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:14.607959  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:15.108981  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:15.607952  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:16.108673  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:16.608170  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:17.108757  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:17.608081  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:18.108738  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:18.608607  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:19.108699  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:19.608389  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:20.107908  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:20.608001  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:21.108548  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:21.608334  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:22.108180  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:22.607875  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:23.108675  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:23.608625  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:24.108180  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:24.608668  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:25.108754  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:25.607950  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:26.107930  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:26.607944  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:27.108744  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:27.608613  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:28.108398  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:28.608347  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:29.108513  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:29.607943  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:30.108298  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:30.607986  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:31.108862  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:31.608852  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:32.108838  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:32.608448  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:33.108526  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:33.608595  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:34.108250  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:34.607930  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:35.107952  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:35.608214  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:36.108509  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:36.608114  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:37.108454  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:37.607937  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:38.108594  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:38.607928  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:39.107995  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:39.608876  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:40.107937  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:40.607935  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:41.108437  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:41.607967  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:42.110329  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:42.608527  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:43.108197  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:43.608003  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:44.108494  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:44.608788  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:45.108779  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:45.608786  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:46.108080  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:46.608527  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:47.108485  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:47.608412  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:48.108174  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:48.608559  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:49.108719  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:49.608778  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:50.108396  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:50.608188  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:51.108854  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:51.607920  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:52.108260  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:52.607897  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:53.108165  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:53.608820  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:54.107921  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:54.608807  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:55.107966  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:55.608683  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:56.108704  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:56.608641  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:57.107949  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:57.608891  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:58.107911  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:58.607913  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:59.108124  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:09:59.608080  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:00.126668  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:00.607936  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:01.107972  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:01.607964  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:02.108918  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:02.608274  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:03.108889  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:03.607948  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:04.108838  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:04.608617  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:05.108707  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:05.608552  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:06.108350  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:06.607927  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:07.108601  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:07.607942  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:08.108292  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:08.607954  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:09.108836  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:09.608829  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
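	(The run of lines above is minikube's wait-for-apiserver loop: roughly every 500 ms it re-runs sudo pgrep -xnf kube-apiserver.*minikube.* over SSH until a matching process appears. The following is a minimal stand-alone sketch of that polling pattern only; the interval, timeout, and local execution are illustrative assumptions, not the minikube implementation, which runs the command through its SSH runner with sudo.)

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess polls `pgrep -xnf pattern` until it succeeds or ctx expires.
	// pgrep exits 0 when at least one process matches the pattern, non-zero otherwise.
	func waitForProcess(ctx context.Context, pattern string, interval time.Duration) error {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			if err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Run(); err == nil {
				return nil // a matching process exists
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("timed out waiting for %q: %w", pattern, ctx.Err())
			case <-ticker.C:
				// retry on the next tick
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		if err := waitForProcess(ctx, "kube-apiserver.*minikube.*", 500*time.Millisecond); err != nil {
			fmt.Println(err)
		}
	}
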
	I1227 20:10:10.108562  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:10.108721  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:10.138615  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:10.138637  319301 cri.go:96] found id: ""
	I1227 20:10:10.138646  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:10.138711  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:10.143115  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:10.143189  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:10.173558  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:10.173579  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:10.173584  319301 cri.go:96] found id: ""
	I1227 20:10:10.173592  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:10.173653  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:10.178008  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:10.182191  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:10.182272  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:10.220643  319301 cri.go:96] found id: ""
	I1227 20:10:10.220668  319301 logs.go:282] 0 containers: []
	W1227 20:10:10.220677  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:10.220684  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:10.220746  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:10.250139  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:10.250162  319301 cri.go:96] found id: ""
	I1227 20:10:10.250170  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:10.250228  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:10.253966  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:10.254039  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:10.290311  319301 cri.go:96] found id: ""
	I1227 20:10:10.290334  319301 logs.go:282] 0 containers: []
	W1227 20:10:10.290343  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:10.290349  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:10.290422  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:10.319925  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:10.319948  319301 cri.go:96] found id: ""
	I1227 20:10:10.319974  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:10.320031  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:10.323821  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:10.323902  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:10.352069  319301 cri.go:96] found id: ""
	I1227 20:10:10.352091  319301 logs.go:282] 0 containers: []
	W1227 20:10:10.352100  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:10.352115  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:10.352127  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:10.451345  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:10.451385  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:10.469929  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:10.469961  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:10.875866  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:10.868032    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.868914    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.869711    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.870583    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.872332    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:10.868032    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.868914    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.869711    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.870583    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:10.872332    1482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:10.875894  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:10.875909  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:10.936407  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:10.936442  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:10.983671  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:10.983707  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:11.017260  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:11.017294  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:11.052563  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:11.052594  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:11.130184  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:11.130222  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:11.162524  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:11.162557  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
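	(Each gather pass above follows the same two-step pattern per component: look up container IDs with crictl ps -a --quiet --name=<component>, then, for any ID found, tail its logs with crictl logs --tail 400 <id>. The sketch below reproduces just that pattern; running the commands locally and the chosen component list are assumptions for illustration, whereas the report executes them on the node over SSH.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns the IDs crictl reports for containers whose name
	// matches the given component (e.g. "kube-apiserver", "etcd").
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "--timeout=10s",
			"ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	// tailLogs returns the last n log lines of a container.
	func tailLogs(id string, n int) (string, error) {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := containerIDs(component)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", component)
				continue
			}
			for _, id := range ids {
				logs, _ := tailLogs(id, 400)
				fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
			}
		}
	}
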
	I1227 20:10:13.706075  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:13.716624  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:13.716698  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:13.747368  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:13.747388  319301 cri.go:96] found id: ""
	I1227 20:10:13.747396  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:13.747456  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:13.751096  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:13.751188  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:13.777717  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:13.777790  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:13.777802  319301 cri.go:96] found id: ""
	I1227 20:10:13.777811  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:13.777878  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:13.781548  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:13.785083  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:13.785193  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:13.811036  319301 cri.go:96] found id: ""
	I1227 20:10:13.811063  319301 logs.go:282] 0 containers: []
	W1227 20:10:13.811072  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:13.811079  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:13.811137  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:13.837822  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:13.837845  319301 cri.go:96] found id: ""
	I1227 20:10:13.837854  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:13.837911  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:13.841739  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:13.841856  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:13.868264  319301 cri.go:96] found id: ""
	I1227 20:10:13.868341  319301 logs.go:282] 0 containers: []
	W1227 20:10:13.868364  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:13.868387  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:13.868471  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:13.894511  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:13.894535  319301 cri.go:96] found id: ""
	I1227 20:10:13.894543  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:13.894621  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:13.898655  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:13.898764  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:13.924022  319301 cri.go:96] found id: ""
	I1227 20:10:13.924047  319301 logs.go:282] 0 containers: []
	W1227 20:10:13.924062  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:13.924077  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:13.924089  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:13.956536  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:13.956567  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:14.057854  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:14.057894  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:14.139219  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:14.129809    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.130897    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.132418    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.132833    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.134384    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:14.129809    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.130897    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.132418    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.132833    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:14.134384    1615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:14.139251  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:14.139265  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:14.182716  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:14.182750  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:14.208224  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:14.208301  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:14.225984  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:14.226016  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:14.256249  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:14.256314  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:14.301058  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:14.301201  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:14.329017  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:14.329046  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:16.906959  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:16.917912  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:16.917986  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:16.947235  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:16.947299  319301 cri.go:96] found id: ""
	I1227 20:10:16.947322  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:16.947404  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:16.951076  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:16.951204  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:16.984938  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:16.984962  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:16.984968  319301 cri.go:96] found id: ""
	I1227 20:10:16.984976  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:16.985053  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:16.988800  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:16.992512  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:16.992592  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:17.026764  319301 cri.go:96] found id: ""
	I1227 20:10:17.026789  319301 logs.go:282] 0 containers: []
	W1227 20:10:17.026798  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:17.026804  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:17.026875  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:17.053717  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:17.053741  319301 cri.go:96] found id: ""
	I1227 20:10:17.053749  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:17.053803  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:17.057601  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:17.057691  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:17.088432  319301 cri.go:96] found id: ""
	I1227 20:10:17.088455  319301 logs.go:282] 0 containers: []
	W1227 20:10:17.088464  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:17.088470  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:17.088529  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:17.115961  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:17.115985  319301 cri.go:96] found id: ""
	I1227 20:10:17.115995  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:17.116046  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:17.119890  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:17.119963  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:17.148631  319301 cri.go:96] found id: ""
	I1227 20:10:17.148654  319301 logs.go:282] 0 containers: []
	W1227 20:10:17.148663  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:17.148678  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:17.148694  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:17.240100  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:17.240138  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:17.259693  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:17.259725  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:17.291635  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:17.291666  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:17.368588  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:17.368624  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:17.407623  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:17.407652  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:17.475650  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:17.467352    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.467760    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.469497    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.470032    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.471718    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:17.467352    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.467760    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.469497    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.470032    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:17.471718    1761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:17.475719  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:17.475739  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:17.516294  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:17.516328  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:17.559509  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:17.559544  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:17.587296  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:17.587332  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:20.115472  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:20.126778  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:20.126847  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:20.153825  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:20.153850  319301 cri.go:96] found id: ""
	I1227 20:10:20.153859  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:20.153919  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:20.157682  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:20.157759  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:20.189317  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:20.189386  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:20.189420  319301 cri.go:96] found id: ""
	I1227 20:10:20.189493  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:20.189582  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:20.193669  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:20.197374  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:20.197473  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:20.237542  319301 cri.go:96] found id: ""
	I1227 20:10:20.237570  319301 logs.go:282] 0 containers: []
	W1227 20:10:20.237579  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:20.237585  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:20.237643  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:20.274313  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:20.274381  319301 cri.go:96] found id: ""
	I1227 20:10:20.274417  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:20.274509  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:20.279651  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:20.279718  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:20.306525  319301 cri.go:96] found id: ""
	I1227 20:10:20.306586  319301 logs.go:282] 0 containers: []
	W1227 20:10:20.306610  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:20.306636  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:20.306707  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:20.333808  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:20.333829  319301 cri.go:96] found id: ""
	I1227 20:10:20.333837  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:20.333927  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:20.337575  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:20.337677  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:20.372581  319301 cri.go:96] found id: ""
	I1227 20:10:20.372607  319301 logs.go:282] 0 containers: []
	W1227 20:10:20.372621  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:20.372636  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:20.372647  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:20.467758  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:20.467794  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:20.486495  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:20.486527  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:20.553188  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:20.545238    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.545758    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.547330    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.548070    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.549570    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:20.545238    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.545758    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.547330    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.548070    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:20.549570    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:20.553253  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:20.553282  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:20.580345  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:20.580374  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:20.626310  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:20.626345  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:20.670432  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:20.670467  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:20.696170  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:20.696199  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:20.730948  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:20.730976  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:20.805291  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:20.805325  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:23.351696  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:23.362369  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:23.362478  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:23.391572  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:23.391649  319301 cri.go:96] found id: ""
	I1227 20:10:23.391664  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:23.391739  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:23.395547  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:23.395671  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:23.422118  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:23.422141  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:23.422147  319301 cri.go:96] found id: ""
	I1227 20:10:23.422155  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:23.422235  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:23.426008  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:23.429336  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:23.429411  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:23.459272  319301 cri.go:96] found id: ""
	I1227 20:10:23.459299  319301 logs.go:282] 0 containers: []
	W1227 20:10:23.459308  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:23.459316  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:23.459398  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:23.484648  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:23.484671  319301 cri.go:96] found id: ""
	I1227 20:10:23.484679  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:23.484755  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:23.488422  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:23.488501  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:23.512953  319301 cri.go:96] found id: ""
	I1227 20:10:23.512978  319301 logs.go:282] 0 containers: []
	W1227 20:10:23.512987  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:23.512994  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:23.513049  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:23.538866  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:23.538889  319301 cri.go:96] found id: ""
	I1227 20:10:23.538898  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:23.538952  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:23.542487  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:23.542556  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:23.568959  319301 cri.go:96] found id: ""
	I1227 20:10:23.568985  319301 logs.go:282] 0 containers: []
	W1227 20:10:23.568994  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:23.569010  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:23.569023  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:23.614313  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:23.614346  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:23.639847  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:23.639875  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:23.671907  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:23.671936  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:23.702365  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:23.702394  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:23.783203  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:23.783246  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:23.884915  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:23.884948  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:23.902305  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:23.902337  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:23.970687  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:23.961560    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.962112    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.963576    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.964060    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.965635    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:23.961560    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.962112    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.963576    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.964060    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:23.965635    2032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:23.970722  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:23.970735  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:24.004792  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:24.004819  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:26.564703  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:26.575059  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:26.575143  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:26.604294  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:26.604317  319301 cri.go:96] found id: ""
	I1227 20:10:26.604326  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:26.604381  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:26.608875  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:26.608942  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:26.634574  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:26.634595  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:26.634600  319301 cri.go:96] found id: ""
	I1227 20:10:26.634607  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:26.634660  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:26.638317  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:26.641718  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:26.641787  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:26.670771  319301 cri.go:96] found id: ""
	I1227 20:10:26.670793  319301 logs.go:282] 0 containers: []
	W1227 20:10:26.670802  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:26.670808  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:26.670867  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:26.697344  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:26.697376  319301 cri.go:96] found id: ""
	I1227 20:10:26.697386  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:26.697491  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:26.701237  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:26.701344  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:26.726058  319301 cri.go:96] found id: ""
	I1227 20:10:26.726125  319301 logs.go:282] 0 containers: []
	W1227 20:10:26.726140  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:26.726147  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:26.726209  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:26.752574  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:26.752594  319301 cri.go:96] found id: ""
	I1227 20:10:26.752602  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:26.752658  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:26.756386  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:26.756457  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:26.786442  319301 cri.go:96] found id: ""
	I1227 20:10:26.786465  319301 logs.go:282] 0 containers: []
	W1227 20:10:26.786474  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:26.786488  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:26.786500  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:26.814367  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:26.814441  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:26.839989  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:26.840061  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:26.876712  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:26.876796  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:26.918742  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:26.918784  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:26.961668  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:26.961699  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:26.994123  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:26.994151  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:27.085553  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:27.085590  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:27.186397  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:27.186433  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:27.204121  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:27.204153  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:27.273016  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:27.262702    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.263577    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.265227    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.266801    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.267439    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:27.262702    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.263577    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.265227    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.266801    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:27.267439    2175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
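	(The block above, and the near-identical blocks that follow, record minikube's wait loop: it repeatedly checks for a running kube-apiserver process, lists the matching CRI containers with crictl, and falls back to gathering component logs whenever "kubectl describe nodes" still gets "connection refused" on localhost:8443. The following is a minimal Go sketch of that probe-and-retry shape only, not minikube's actual implementation; it assumes sudo, crictl and kubectl are available on the node, and the 5-attempt/3-second values are illustrative.)

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning mirrors the check seen in the log:
	// `sudo pgrep -xnf kube-apiserver.*minikube.*`
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	// containerIDs mirrors `sudo crictl --timeout=10s ps -a --quiet --name=<name>`.
	func containerIDs(name string) ([]byte, error) {
		return exec.Command("sudo", "crictl", "--timeout=10s", "ps", "-a", "--quiet", "--name="+name).Output()
	}

	func main() {
		for attempt := 1; attempt <= 5; attempt++ {
			if apiserverRunning() {
				// Same end goal as the log's "describe nodes" step; succeeds only
				// once the API server answers on its advertised port.
				if out, err := exec.Command("kubectl", "describe", "nodes").CombinedOutput(); err == nil {
					fmt.Printf("%s", out)
					return
				}
			}
			// API server not reachable yet (e.g. connection refused on :8443):
			// collect whatever component containers exist, then wait and retry.
			if ids, err := containerIDs("kube-apiserver"); err == nil {
				fmt.Printf("attempt %d: kube-apiserver containers: %s\n", attempt, ids)
			}
			time.Sleep(3 * time.Second)
		}
		fmt.Println("kube-apiserver never became reachable on localhost:8443")
	}
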
	I1227 20:10:29.773264  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:29.783744  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:29.783817  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:29.813744  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:29.813806  319301 cri.go:96] found id: ""
	I1227 20:10:29.813829  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:29.813919  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:29.818669  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:29.818786  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:29.844784  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:29.844802  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:29.844806  319301 cri.go:96] found id: ""
	I1227 20:10:29.844814  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:29.844868  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:29.848603  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:29.852078  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:29.852143  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:29.878788  319301 cri.go:96] found id: ""
	I1227 20:10:29.878814  319301 logs.go:282] 0 containers: []
	W1227 20:10:29.878823  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:29.878830  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:29.878890  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:29.908178  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:29.908200  319301 cri.go:96] found id: ""
	I1227 20:10:29.908209  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:29.908264  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:29.911793  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:29.911884  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:29.952724  319301 cri.go:96] found id: ""
	I1227 20:10:29.952749  319301 logs.go:282] 0 containers: []
	W1227 20:10:29.952759  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:29.952765  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:29.952855  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:30.008208  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:30.008289  319301 cri.go:96] found id: ""
	I1227 20:10:30.008312  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:30.008390  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:30.012672  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:30.012766  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:30.063201  319301 cri.go:96] found id: ""
	I1227 20:10:30.063273  319301 logs.go:282] 0 containers: []
	W1227 20:10:30.063297  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:30.063334  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:30.063369  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:30.152059  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:30.152097  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:30.188985  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:30.189011  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:30.288999  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:30.289079  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:30.307734  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:30.307764  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:30.354973  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:30.355008  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:30.425745  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:30.417740    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.418295    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.419807    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.420357    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.421985    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:30.417740    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.418295    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.419807    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.420357    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:30.421985    2269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:30.425773  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:30.425789  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:30.454739  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:30.454771  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:30.511002  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:30.511040  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:30.537495  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:30.537526  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:33.065805  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:33.076295  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:33.076418  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:33.103323  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:33.103346  319301 cri.go:96] found id: ""
	I1227 20:10:33.103356  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:33.103410  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:33.107007  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:33.107081  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:33.133167  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:33.133190  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:33.133195  319301 cri.go:96] found id: ""
	I1227 20:10:33.133203  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:33.133264  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:33.137298  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:33.141081  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:33.141152  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:33.167830  319301 cri.go:96] found id: ""
	I1227 20:10:33.167854  319301 logs.go:282] 0 containers: []
	W1227 20:10:33.167862  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:33.167869  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:33.167929  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:33.196531  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:33.196555  319301 cri.go:96] found id: ""
	I1227 20:10:33.196564  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:33.196621  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:33.200165  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:33.200267  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:33.226904  319301 cri.go:96] found id: ""
	I1227 20:10:33.226933  319301 logs.go:282] 0 containers: []
	W1227 20:10:33.226943  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:33.226950  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:33.227009  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:33.254111  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:33.254132  319301 cri.go:96] found id: ""
	I1227 20:10:33.254141  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:33.254197  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:33.258995  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:33.259128  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:33.285296  319301 cri.go:96] found id: ""
	I1227 20:10:33.285320  319301 logs.go:282] 0 containers: []
	W1227 20:10:33.285330  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:33.285350  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:33.285363  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:33.379312  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:33.379349  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:33.397669  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:33.397703  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:33.475423  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:33.464091    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.464710    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.467091    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.469890    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.471637    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:33.464091    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.464710    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.467091    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.469890    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:33.471637    2376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:33.475445  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:33.475462  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:33.505362  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:33.505391  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:33.549322  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:33.549353  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:33.592755  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:33.592789  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:33.625076  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:33.625105  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:33.676663  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:33.676692  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:33.703598  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:33.703627  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:36.283392  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:36.293854  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:36.293938  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:36.321425  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:36.321524  319301 cri.go:96] found id: ""
	I1227 20:10:36.321538  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:36.321604  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:36.325322  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:36.325393  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:36.354160  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:36.354182  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:36.354187  319301 cri.go:96] found id: ""
	I1227 20:10:36.354194  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:36.354250  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:36.357942  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:36.361261  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:36.361336  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:36.387328  319301 cri.go:96] found id: ""
	I1227 20:10:36.387356  319301 logs.go:282] 0 containers: []
	W1227 20:10:36.387366  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:36.387373  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:36.387431  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:36.418785  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:36.418807  319301 cri.go:96] found id: ""
	I1227 20:10:36.418815  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:36.418871  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:36.422631  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:36.422709  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:36.452773  319301 cri.go:96] found id: ""
	I1227 20:10:36.452799  319301 logs.go:282] 0 containers: []
	W1227 20:10:36.452807  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:36.452814  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:36.452873  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:36.478409  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:36.478432  319301 cri.go:96] found id: ""
	I1227 20:10:36.478440  319301 logs.go:282] 1 containers: [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:36.478515  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:36.482226  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:36.482329  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:36.510113  319301 cri.go:96] found id: ""
	I1227 20:10:36.510139  319301 logs.go:282] 0 containers: []
	W1227 20:10:36.510148  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:36.510162  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:36.510206  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:36.528485  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:36.528518  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:36.596104  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:36.586542    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.587371    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.589128    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.589804    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.591834    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:36.586542    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.587371    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.589128    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.589804    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:36.591834    2507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:36.596128  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:36.596153  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:36.656568  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:36.656646  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:36.685002  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:36.685040  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:36.719044  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:36.719072  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:36.815628  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:36.815664  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:36.845372  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:36.845407  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:36.892923  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:36.892962  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:36.920168  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:36.920205  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:39.498228  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:39.509127  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:39.509200  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:39.535429  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:39.535450  319301 cri.go:96] found id: ""
	I1227 20:10:39.535458  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:39.535511  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:39.539036  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:39.539115  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:39.565370  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:39.565395  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:39.565401  319301 cri.go:96] found id: ""
	I1227 20:10:39.565411  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:39.565505  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:39.569317  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:39.572838  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:39.572913  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:39.600208  319301 cri.go:96] found id: ""
	I1227 20:10:39.600233  319301 logs.go:282] 0 containers: []
	W1227 20:10:39.600243  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:39.600249  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:39.600359  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:39.627924  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:39.627947  319301 cri.go:96] found id: ""
	I1227 20:10:39.627955  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:39.628038  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:39.631825  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:39.631929  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:39.670875  319301 cri.go:96] found id: ""
	I1227 20:10:39.670898  319301 logs.go:282] 0 containers: []
	W1227 20:10:39.670907  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:39.670949  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:39.671032  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:39.698935  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:39.698963  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:39.698968  319301 cri.go:96] found id: ""
	I1227 20:10:39.698976  319301 logs.go:282] 2 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:39.699057  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:39.702755  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:39.706280  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:39.706367  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:39.732144  319301 cri.go:96] found id: ""
	I1227 20:10:39.732171  319301 logs.go:282] 0 containers: []
	W1227 20:10:39.732192  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:39.732202  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:39.732218  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:39.833062  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:39.833097  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:39.851039  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:39.851169  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:39.936210  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:39.936253  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:40.017614  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:40.018998  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:40.077844  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:40.077881  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:40.191560  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:40.191604  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:40.229430  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:40.229483  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:40.316177  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:40.307077    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.308580    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.309399    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.310789    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.312661    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:40.307077    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.308580    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.309399    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.310789    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:40.312661    2686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:40.316202  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:40.316215  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:40.351544  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:40.351584  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:40.379852  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:40.379880  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:42.911718  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:42.922519  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:42.922590  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:42.949680  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:42.949705  319301 cri.go:96] found id: ""
	I1227 20:10:42.949714  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:42.949773  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:42.953773  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:42.953858  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:42.986307  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:42.986333  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:42.986340  319301 cri.go:96] found id: ""
	I1227 20:10:42.986347  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:42.986401  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:42.989939  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:42.993412  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:42.993511  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:43.027198  319301 cri.go:96] found id: ""
	I1227 20:10:43.027224  319301 logs.go:282] 0 containers: []
	W1227 20:10:43.027244  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:43.027251  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:43.027314  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:43.054716  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:43.054739  319301 cri.go:96] found id: ""
	I1227 20:10:43.054748  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:43.054803  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:43.059284  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:43.059357  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:43.093962  319301 cri.go:96] found id: ""
	I1227 20:10:43.093986  319301 logs.go:282] 0 containers: []
	W1227 20:10:43.093995  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:43.094002  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:43.094060  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:43.122219  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:43.122257  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:43.122263  319301 cri.go:96] found id: ""
	I1227 20:10:43.122270  319301 logs.go:282] 2 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:43.122337  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:43.126232  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:43.129862  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:43.129978  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:43.156857  319301 cri.go:96] found id: ""
	I1227 20:10:43.156882  319301 logs.go:282] 0 containers: []
	W1227 20:10:43.156891  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:43.156901  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:43.156914  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:43.174975  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:43.175005  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:43.219964  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:43.220004  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:43.245562  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:43.245591  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:43.276688  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:43.276770  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:43.358338  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:43.358380  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:43.402206  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:43.402234  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:43.499249  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:43.499289  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:43.576572  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:43.568454    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.569067    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.570849    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.571386    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.572871    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:43.568454    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.569067    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.570849    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.571386    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:43.572871    2821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:43.576591  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:43.576605  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:43.604599  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:43.604686  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:43.650961  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:43.651038  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:46.181580  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:46.192165  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:46.192233  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:46.218480  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:46.218500  319301 cri.go:96] found id: ""
	I1227 20:10:46.218509  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:46.218563  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:46.222189  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:46.222263  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:46.253302  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:46.253327  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:46.253332  319301 cri.go:96] found id: ""
	I1227 20:10:46.253340  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:46.253398  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:46.257309  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:46.260898  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:46.260974  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:46.289145  319301 cri.go:96] found id: ""
	I1227 20:10:46.289218  319301 logs.go:282] 0 containers: []
	W1227 20:10:46.289241  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:46.289262  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:46.289352  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:46.318927  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:46.318948  319301 cri.go:96] found id: ""
	I1227 20:10:46.318956  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:46.319015  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:46.322605  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:46.322674  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:46.354035  319301 cri.go:96] found id: ""
	I1227 20:10:46.354061  319301 logs.go:282] 0 containers: []
	W1227 20:10:46.354071  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:46.354077  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:46.354168  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:46.384710  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:46.384734  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:46.384740  319301 cri.go:96] found id: ""
	I1227 20:10:46.384748  319301 logs.go:282] 2 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:46.384803  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:46.388496  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:46.392532  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:46.392611  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:46.421588  319301 cri.go:96] found id: ""
	I1227 20:10:46.421664  319301 logs.go:282] 0 containers: []
	W1227 20:10:46.421686  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:46.421709  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:46.421746  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:46.439228  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:46.439330  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:46.484770  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:46.484806  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:46.519247  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:46.519273  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:46.597066  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:46.597101  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:46.634009  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:46.634040  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:46.701472  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:46.693690    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.694466    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.695987    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.696422    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.697877    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:46.693690    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.694466    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.695987    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.696422    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:46.697877    2945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:46.701496  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:46.701512  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:46.729296  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:46.729326  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:46.774639  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:46.774678  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:46.799969  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:46.800005  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:46.826163  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:46.826192  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:49.429141  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:49.439610  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:49.439705  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:49.470260  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:49.470283  319301 cri.go:96] found id: ""
	I1227 20:10:49.470292  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:49.470350  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:49.474256  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:49.474343  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:49.501740  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:49.501762  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:49.501767  319301 cri.go:96] found id: ""
	I1227 20:10:49.501774  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:49.501850  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:49.505843  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:49.509390  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:49.509489  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:49.543998  319301 cri.go:96] found id: ""
	I1227 20:10:49.544022  319301 logs.go:282] 0 containers: []
	W1227 20:10:49.544041  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:49.544049  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:49.544107  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:49.570494  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:49.570517  319301 cri.go:96] found id: ""
	I1227 20:10:49.570525  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:49.570581  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:49.574401  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:49.574471  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:49.603448  319301 cri.go:96] found id: ""
	I1227 20:10:49.603475  319301 logs.go:282] 0 containers: []
	W1227 20:10:49.603486  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:49.603500  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:49.603573  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:49.633356  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:49.633379  319301 cri.go:96] found id: "8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:49.633385  319301 cri.go:96] found id: ""
	I1227 20:10:49.633392  319301 logs.go:282] 2 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a]
	I1227 20:10:49.633474  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:49.637216  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:49.641370  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:49.641472  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:49.669518  319301 cri.go:96] found id: ""
	I1227 20:10:49.669557  319301 logs.go:282] 0 containers: []
	W1227 20:10:49.669567  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:49.669576  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:49.669588  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:49.696361  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:49.696389  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:49.721155  319301 logs.go:123] Gathering logs for kube-controller-manager [8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a] ...
	I1227 20:10:49.721184  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8638905cb0e3ac43ad600322d5e92cb3910cbf601791bc348a565562c161724a"
	I1227 20:10:49.753420  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:49.753489  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:49.832989  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:49.833025  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:49.874986  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:49.875013  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:49.978286  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:49.978321  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:49.997322  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:49.997351  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:50.080526  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:50.072015    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.072678    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.074595    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.075259    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.076874    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:50.072015    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.072678    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.074595    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.075259    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:50.076874    3096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:50.080546  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:50.080560  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:50.139866  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:50.139902  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:50.184649  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:50.184682  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:52.713968  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:52.726778  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:52.726855  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:52.758017  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:52.758040  319301 cri.go:96] found id: ""
	I1227 20:10:52.758049  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:52.758104  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:52.761780  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:52.761855  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:52.789053  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:52.789076  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:52.789081  319301 cri.go:96] found id: ""
	I1227 20:10:52.789088  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:52.789140  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:52.792812  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:52.796144  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:52.796211  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:52.825853  319301 cri.go:96] found id: ""
	I1227 20:10:52.825883  319301 logs.go:282] 0 containers: []
	W1227 20:10:52.825892  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:52.825898  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:52.825955  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:52.851800  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:52.851820  319301 cri.go:96] found id: ""
	I1227 20:10:52.851828  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:52.851881  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:52.855382  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:52.855455  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:52.885699  319301 cri.go:96] found id: ""
	I1227 20:10:52.885721  319301 logs.go:282] 0 containers: []
	W1227 20:10:52.885736  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:52.885742  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:52.885800  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:52.911251  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:52.911316  319301 cri.go:96] found id: ""
	I1227 20:10:52.911339  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:10:52.911402  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:52.914760  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:52.914841  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:52.939685  319301 cri.go:96] found id: ""
	I1227 20:10:52.939718  319301 logs.go:282] 0 containers: []
	W1227 20:10:52.939728  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:52.939742  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:52.939789  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:53.033951  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:53.033990  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:53.052877  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:53.052906  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:53.096670  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:53.096715  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:53.128695  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:53.128722  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:53.161100  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:53.161130  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:53.227545  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:53.218833    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.219420    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.221028    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.221951    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.223525    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:53.218833    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.219420    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.221028    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.221951    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:53.223525    3221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:53.227617  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:53.227640  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:53.255984  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:53.256125  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:53.313035  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:53.313074  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:53.338975  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:53.339057  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:55.915383  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:55.925492  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:55.925565  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:55.952010  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:55.952028  319301 cri.go:96] found id: ""
	I1227 20:10:55.952037  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:55.952092  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:55.955593  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:55.955667  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:55.986538  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:55.986561  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:55.986567  319301 cri.go:96] found id: ""
	I1227 20:10:55.986574  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:55.986628  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:55.990714  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:55.995050  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:55.995121  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:56.024488  319301 cri.go:96] found id: ""
	I1227 20:10:56.024565  319301 logs.go:282] 0 containers: []
	W1227 20:10:56.024588  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:56.024612  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:56.024696  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:56.056966  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:56.057039  319301 cri.go:96] found id: ""
	I1227 20:10:56.057065  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:56.057155  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:56.061997  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:56.062234  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:56.089345  319301 cri.go:96] found id: ""
	I1227 20:10:56.089372  319301 logs.go:282] 0 containers: []
	W1227 20:10:56.089381  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:56.089388  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:56.089488  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:56.117758  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:56.117782  319301 cri.go:96] found id: ""
	I1227 20:10:56.117790  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:10:56.117845  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:56.121319  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:56.121432  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:56.147067  319301 cri.go:96] found id: ""
	I1227 20:10:56.147092  319301 logs.go:282] 0 containers: []
	W1227 20:10:56.147102  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:56.147115  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:56.147130  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:56.224179  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:56.224218  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:10:56.256694  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:56.256721  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:56.283858  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:56.283889  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:56.353505  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:56.342078    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.343458    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.346135    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.347096    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.347948    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:56.342078    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.343458    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.346135    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.347096    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:56.347948    3331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:56.353534  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:56.353548  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:56.399836  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:56.399870  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:56.494637  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:56.494677  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:56.528262  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:56.528292  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:56.577163  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:56.577198  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:56.605916  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:56.605945  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:59.134704  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:10:59.144988  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:10:59.145094  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:10:59.170826  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:59.170846  319301 cri.go:96] found id: ""
	I1227 20:10:59.170859  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:10:59.170916  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:59.174542  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:10:59.174618  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:10:59.204712  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:59.204734  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:59.204738  319301 cri.go:96] found id: ""
	I1227 20:10:59.204746  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:10:59.204800  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:59.208625  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:59.212119  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:10:59.212200  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:10:59.241075  319301 cri.go:96] found id: ""
	I1227 20:10:59.241150  319301 logs.go:282] 0 containers: []
	W1227 20:10:59.241174  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:10:59.241195  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:10:59.241312  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:10:59.277168  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:59.277252  319301 cri.go:96] found id: ""
	I1227 20:10:59.277274  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:10:59.277366  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:59.281934  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:10:59.282029  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:10:59.307601  319301 cri.go:96] found id: ""
	I1227 20:10:59.307627  319301 logs.go:282] 0 containers: []
	W1227 20:10:59.307636  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:10:59.307643  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:10:59.307704  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:10:59.341899  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:59.341923  319301 cri.go:96] found id: ""
	I1227 20:10:59.341931  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:10:59.341999  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:10:59.345734  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:10:59.345844  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:10:59.371593  319301 cri.go:96] found id: ""
	I1227 20:10:59.371661  319301 logs.go:282] 0 containers: []
	W1227 20:10:59.371683  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:10:59.371716  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:10:59.371755  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:10:59.464618  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:10:59.464654  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:10:59.483758  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:10:59.483793  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:10:59.555654  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:10:59.546856    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.547308    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.548491    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.548938    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.550344    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:10:59.546856    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.547308    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.548491    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.548938    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:10:59.550344    3447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:10:59.555678  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:10:59.555696  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:10:59.583971  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:10:59.584004  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:10:59.635084  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:10:59.635118  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:10:59.662345  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:10:59.662375  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:10:59.726915  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:10:59.726950  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:10:59.754060  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:10:59.754094  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:10:59.836493  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:10:59.836534  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:02.376222  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:02.386794  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:02.386868  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:02.419031  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:02.419054  319301 cri.go:96] found id: ""
	I1227 20:11:02.419062  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:02.419118  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:02.423033  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:02.423106  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:02.448867  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:02.448891  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:02.448896  319301 cri.go:96] found id: ""
	I1227 20:11:02.448903  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:02.448957  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:02.452561  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:02.455963  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:02.456070  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:02.484254  319301 cri.go:96] found id: ""
	I1227 20:11:02.484281  319301 logs.go:282] 0 containers: []
	W1227 20:11:02.484290  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:02.484297  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:02.484357  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:02.511483  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:02.511506  319301 cri.go:96] found id: ""
	I1227 20:11:02.511515  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:02.511580  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:02.515291  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:02.515364  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:02.542839  319301 cri.go:96] found id: ""
	I1227 20:11:02.542866  319301 logs.go:282] 0 containers: []
	W1227 20:11:02.542886  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:02.542894  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:02.543025  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:02.576471  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:02.576505  319301 cri.go:96] found id: ""
	I1227 20:11:02.576519  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:02.576578  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:02.580126  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:02.580205  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:02.610225  319301 cri.go:96] found id: ""
	I1227 20:11:02.610252  319301 logs.go:282] 0 containers: []
	W1227 20:11:02.610261  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:02.610275  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:02.610316  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:02.640738  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:02.640766  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:02.688087  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:02.688120  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:02.714149  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:02.714175  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:02.743134  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:02.743161  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:02.822169  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:02.822206  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:02.894561  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:02.894595  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:02.936069  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:02.936096  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:03.036539  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:03.036573  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:03.054449  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:03.054480  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:03.132045  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:03.124246    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.125028    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.126504    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.127054    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.128486    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:03.124246    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.125028    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.126504    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.127054    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:03.128486    3628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:05.633596  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:05.644441  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:05.644564  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:05.671495  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:05.671520  319301 cri.go:96] found id: ""
	I1227 20:11:05.671528  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:05.671603  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:05.675058  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:05.675148  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:05.699421  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:05.699443  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:05.699448  319301 cri.go:96] found id: ""
	I1227 20:11:05.699456  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:05.699512  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:05.703223  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:05.706661  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:05.706747  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:05.731295  319301 cri.go:96] found id: ""
	I1227 20:11:05.731319  319301 logs.go:282] 0 containers: []
	W1227 20:11:05.731328  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:05.731334  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:05.731409  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:05.758394  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:05.758427  319301 cri.go:96] found id: ""
	I1227 20:11:05.758435  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:05.758500  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:05.762213  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:05.762304  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:05.788439  319301 cri.go:96] found id: ""
	I1227 20:11:05.788465  319301 logs.go:282] 0 containers: []
	W1227 20:11:05.788473  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:05.788480  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:05.788546  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:05.814115  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:05.814137  319301 cri.go:96] found id: ""
	I1227 20:11:05.814145  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:05.814199  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:05.817823  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:05.817893  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:05.844939  319301 cri.go:96] found id: ""
	I1227 20:11:05.844963  319301 logs.go:282] 0 containers: []
	W1227 20:11:05.844973  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:05.844988  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:05.845002  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:05.863023  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:05.863054  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:05.932754  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:05.924777    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.925338    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.926988    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.927561    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.928952    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:05.924777    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.925338    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.926988    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.927561    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:05.928952    3704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:05.932785  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:05.932802  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:05.960574  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:05.960604  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:06.004048  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:06.004082  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:06.055406  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:06.055441  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:06.082613  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:06.082643  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:06.115617  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:06.115646  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:06.149699  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:06.149729  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:06.250917  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:06.250950  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:08.830917  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:08.841316  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:08.841404  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:08.871386  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:08.871407  319301 cri.go:96] found id: ""
	I1227 20:11:08.871415  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:08.871483  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:08.875249  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:08.875334  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:08.905155  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:08.905178  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:08.905182  319301 cri.go:96] found id: ""
	I1227 20:11:08.905189  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:08.905256  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:08.909157  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:08.912623  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:08.912696  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:08.940125  319301 cri.go:96] found id: ""
	I1227 20:11:08.940151  319301 logs.go:282] 0 containers: []
	W1227 20:11:08.940161  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:08.940168  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:08.940228  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:08.979078  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:08.979099  319301 cri.go:96] found id: ""
	I1227 20:11:08.979115  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:08.979172  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:08.982993  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:08.983079  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:09.010456  319301 cri.go:96] found id: ""
	I1227 20:11:09.010482  319301 logs.go:282] 0 containers: []
	W1227 20:11:09.010491  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:09.010498  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:09.010559  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:09.046193  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:09.046226  319301 cri.go:96] found id: ""
	I1227 20:11:09.046235  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:09.046293  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:09.050361  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:09.050429  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:09.076865  319301 cri.go:96] found id: ""
	I1227 20:11:09.076892  319301 logs.go:282] 0 containers: []
	W1227 20:11:09.076901  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:09.076917  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:09.076929  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:09.103766  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:09.103793  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:09.121384  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:09.121412  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:09.190959  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:09.182712    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.183470    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.185037    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.185570    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.187248    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:09.182712    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.183470    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.185037    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.185570    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:09.187248    3839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:09.191026  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:09.191058  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:09.238609  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:09.238648  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:09.332804  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:09.332844  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:09.374845  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:09.374874  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:09.475731  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:09.475770  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:09.505046  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:09.505075  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:09.550742  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:09.550779  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:12.077490  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:12.089114  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:12.089187  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:12.117965  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:12.117987  319301 cri.go:96] found id: ""
	I1227 20:11:12.117995  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:12.118048  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:12.121654  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:12.121727  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:12.150616  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:12.150645  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:12.150650  319301 cri.go:96] found id: ""
	I1227 20:11:12.150658  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:12.150714  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:12.154526  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:12.157975  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:12.158059  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:12.188379  319301 cri.go:96] found id: ""
	I1227 20:11:12.188406  319301 logs.go:282] 0 containers: []
	W1227 20:11:12.188415  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:12.188421  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:12.188479  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:12.214099  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:12.214125  319301 cri.go:96] found id: ""
	I1227 20:11:12.214134  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:12.214187  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:12.217805  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:12.217871  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:12.244974  319301 cri.go:96] found id: ""
	I1227 20:11:12.244999  319301 logs.go:282] 0 containers: []
	W1227 20:11:12.245008  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:12.245015  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:12.245071  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:12.281031  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:12.281071  319301 cri.go:96] found id: ""
	I1227 20:11:12.281079  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:12.281146  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:12.284926  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:12.285004  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:12.311055  319301 cri.go:96] found id: ""
	I1227 20:11:12.311079  319301 logs.go:282] 0 containers: []
	W1227 20:11:12.311088  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:12.311101  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:12.311113  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:12.330032  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:12.330065  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:12.359973  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:12.360000  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:12.405129  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:12.405163  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:12.460783  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:12.460817  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:12.488201  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:12.488230  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:12.565465  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:12.565502  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:12.662969  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:12.663007  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:12.735836  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:12.727495    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.728366    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.730010    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.730324    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.731834    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:12.727495    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.728366    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.730010    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.730324    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:12.731834    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:12.735859  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:12.735872  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:12.763143  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:12.763168  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:15.305823  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:15.318015  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:15.318113  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:15.347994  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:15.348017  319301 cri.go:96] found id: ""
	I1227 20:11:15.348026  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:15.348089  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:15.351955  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:15.352056  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:15.378004  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:15.378026  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:15.378031  319301 cri.go:96] found id: ""
	I1227 20:11:15.378038  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:15.378091  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:15.381599  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:15.384824  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:15.384889  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:15.409597  319301 cri.go:96] found id: ""
	I1227 20:11:15.409673  319301 logs.go:282] 0 containers: []
	W1227 20:11:15.409695  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:15.409716  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:15.409805  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:15.436026  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:15.436091  319301 cri.go:96] found id: ""
	I1227 20:11:15.436114  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:15.436205  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:15.439709  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:15.439776  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:15.472950  319301 cri.go:96] found id: ""
	I1227 20:11:15.472974  319301 logs.go:282] 0 containers: []
	W1227 20:11:15.472983  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:15.472990  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:15.473047  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:15.503060  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:15.503083  319301 cri.go:96] found id: ""
	I1227 20:11:15.503092  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:15.503166  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:15.506772  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:15.506841  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:15.531805  319301 cri.go:96] found id: ""
	I1227 20:11:15.531828  319301 logs.go:282] 0 containers: []
	W1227 20:11:15.531837  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:15.531849  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:15.531861  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:15.557217  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:15.557253  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:15.583522  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:15.583550  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:15.646957  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:15.646994  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:15.677573  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:15.677601  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:15.763080  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:15.763117  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:15.795445  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:15.795473  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:15.895027  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:15.895063  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:15.914036  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:15.914065  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:15.990029  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:15.981434    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.982226    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.983747    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.984333    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.986074    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:15.981434    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.982226    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.983747    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.984333    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:15.986074    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:15.990048  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:15.990061  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:18.535347  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:18.545638  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:18.545712  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:18.573096  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:18.573125  319301 cri.go:96] found id: ""
	I1227 20:11:18.573135  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:18.573190  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:18.577413  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:18.577512  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:18.604633  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:18.604657  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:18.604662  319301 cri.go:96] found id: ""
	I1227 20:11:18.604670  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:18.604724  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:18.610098  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:18.613744  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:18.613821  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:18.645090  319301 cri.go:96] found id: ""
	I1227 20:11:18.645116  319301 logs.go:282] 0 containers: []
	W1227 20:11:18.645126  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:18.645132  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:18.645191  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:18.671681  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:18.671705  319301 cri.go:96] found id: ""
	I1227 20:11:18.671713  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:18.671768  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:18.675284  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:18.675356  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:18.701086  319301 cri.go:96] found id: ""
	I1227 20:11:18.701109  319301 logs.go:282] 0 containers: []
	W1227 20:11:18.701117  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:18.701123  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:18.701183  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:18.733157  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:18.733176  319301 cri.go:96] found id: ""
	I1227 20:11:18.733185  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:18.733237  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:18.736898  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:18.736978  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:18.761319  319301 cri.go:96] found id: ""
	I1227 20:11:18.761340  319301 logs.go:282] 0 containers: []
	W1227 20:11:18.761349  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:18.761362  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:18.761374  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:18.793077  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:18.793104  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:18.819425  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:18.819453  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:18.859846  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:18.859919  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:18.938269  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:18.938303  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:19.040817  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:19.040856  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:19.059170  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:19.059202  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:19.132074  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:19.121248    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.122916    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.123583    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.125207    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.125782    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:19.121248    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.122916    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.123583    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.125207    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:19.125782    4246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:19.132096  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:19.132111  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:19.179880  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:19.179916  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:19.223928  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:19.223963  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:21.759181  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:21.769762  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:21.769833  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:21.800302  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:21.800323  319301 cri.go:96] found id: ""
	I1227 20:11:21.800332  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:21.800395  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:21.804375  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:21.804458  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:21.830687  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:21.830711  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:21.830717  319301 cri.go:96] found id: ""
	I1227 20:11:21.830724  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:21.830779  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:21.834661  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:21.838097  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:21.838198  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:21.864157  319301 cri.go:96] found id: ""
	I1227 20:11:21.864183  319301 logs.go:282] 0 containers: []
	W1227 20:11:21.864193  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:21.864199  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:21.864292  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:21.890722  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:21.890747  319301 cri.go:96] found id: ""
	I1227 20:11:21.890756  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:21.890812  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:21.894377  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:21.894447  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:21.921902  319301 cri.go:96] found id: ""
	I1227 20:11:21.921932  319301 logs.go:282] 0 containers: []
	W1227 20:11:21.921941  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:21.921948  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:21.922013  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:21.948157  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:21.948181  319301 cri.go:96] found id: ""
	I1227 20:11:21.948190  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:21.948246  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:21.951860  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:21.951928  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:21.979147  319301 cri.go:96] found id: ""
	I1227 20:11:21.979171  319301 logs.go:282] 0 containers: []
	W1227 20:11:21.979181  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:21.979222  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:21.979242  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:22.077716  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:22.077768  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:22.161527  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:22.149113    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.149745    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.154386    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.154984    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.157780    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:22.149113    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.149745    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.154386    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.154984    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:22.157780    4345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:22.161553  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:22.161566  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:22.193359  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:22.193386  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:22.247574  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:22.247611  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:22.302993  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:22.303034  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:22.332035  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:22.332064  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:22.358225  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:22.358265  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:22.437089  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:22.437124  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:22.455750  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:22.455781  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:24.990837  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:25.001120  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:25.001190  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:25.040369  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:25.040388  319301 cri.go:96] found id: ""
	I1227 20:11:25.040396  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:25.040452  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:25.044321  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:25.044388  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:25.075240  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:25.075264  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:25.075268  319301 cri.go:96] found id: ""
	I1227 20:11:25.075276  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:25.075331  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:25.079221  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:25.083046  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:25.083117  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:25.111437  319301 cri.go:96] found id: ""
	I1227 20:11:25.111466  319301 logs.go:282] 0 containers: []
	W1227 20:11:25.111475  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:25.111482  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:25.111540  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:25.139474  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:25.139498  319301 cri.go:96] found id: ""
	I1227 20:11:25.139507  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:25.139572  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:25.143469  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:25.143540  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:25.177080  319301 cri.go:96] found id: ""
	I1227 20:11:25.177103  319301 logs.go:282] 0 containers: []
	W1227 20:11:25.177112  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:25.177119  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:25.177235  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:25.204123  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:25.204146  319301 cri.go:96] found id: ""
	I1227 20:11:25.204155  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:25.204238  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:25.207906  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:25.207978  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:25.233127  319301 cri.go:96] found id: ""
	I1227 20:11:25.233150  319301 logs.go:282] 0 containers: []
	W1227 20:11:25.233160  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:25.233175  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:25.233187  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:25.252764  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:25.252793  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:25.302886  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:25.302924  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:25.327231  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:25.327259  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:25.357720  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:25.357749  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:25.396486  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:25.396513  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:25.469872  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:25.461875    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.462332    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.464006    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.464571    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.466153    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:25.461875    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.462332    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.464006    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.464571    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:25.466153    4511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:25.469894  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:25.469907  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:25.498176  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:25.498204  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:25.547245  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:25.547279  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:25.629600  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:25.629639  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:28.230549  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:28.241564  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:28.241641  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:28.279080  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:28.279110  319301 cri.go:96] found id: ""
	I1227 20:11:28.279119  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:28.279185  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:28.284314  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:28.284405  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:28.316322  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:28.316389  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:28.316408  319301 cri.go:96] found id: ""
	I1227 20:11:28.316436  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:28.316522  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:28.320358  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:28.323910  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:28.324004  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:28.354101  319301 cri.go:96] found id: ""
	I1227 20:11:28.354172  319301 logs.go:282] 0 containers: []
	W1227 20:11:28.354195  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:28.354221  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:28.354308  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:28.381894  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:28.381933  319301 cri.go:96] found id: ""
	I1227 20:11:28.381944  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:28.382007  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:28.385565  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:28.385640  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:28.412036  319301 cri.go:96] found id: ""
	I1227 20:11:28.412063  319301 logs.go:282] 0 containers: []
	W1227 20:11:28.412072  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:28.412079  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:28.412136  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:28.437133  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:28.437154  319301 cri.go:96] found id: ""
	I1227 20:11:28.437162  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:28.437216  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:28.440922  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:28.441006  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:28.469470  319301 cri.go:96] found id: ""
	I1227 20:11:28.469495  319301 logs.go:282] 0 containers: []
	W1227 20:11:28.469505  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:28.469518  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:28.469531  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:28.512248  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:28.512281  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:28.538806  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:28.538834  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:28.615719  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:28.615756  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:28.651963  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:28.651992  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:28.753577  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:28.753616  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:28.770745  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:28.770778  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:28.798843  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:28.798878  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:28.867106  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:28.858730    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.859584    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.861103    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.861408    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.863356    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:28.858730    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.859584    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.861103    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.861408    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:28.863356    4641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:28.867124  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:28.867137  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:28.897868  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:28.897897  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:31.455673  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:31.466341  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:31.466412  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:31.494286  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:31.494305  319301 cri.go:96] found id: ""
	I1227 20:11:31.494312  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:31.494368  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:31.499152  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:31.499229  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:31.525626  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:31.525647  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:31.525651  319301 cri.go:96] found id: ""
	I1227 20:11:31.525666  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:31.525721  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:31.529291  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:31.532543  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:31.532612  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:31.558153  319301 cri.go:96] found id: ""
	I1227 20:11:31.558178  319301 logs.go:282] 0 containers: []
	W1227 20:11:31.558187  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:31.558193  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:31.558274  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:31.585024  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:31.585047  319301 cri.go:96] found id: ""
	I1227 20:11:31.585055  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:31.585109  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:31.588772  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:31.588841  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:31.615373  319301 cri.go:96] found id: ""
	I1227 20:11:31.615398  319301 logs.go:282] 0 containers: []
	W1227 20:11:31.615408  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:31.615414  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:31.615474  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:31.644548  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:31.644571  319301 cri.go:96] found id: ""
	I1227 20:11:31.644579  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:31.644634  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:31.648326  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:31.648396  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:31.674106  319301 cri.go:96] found id: ""
	I1227 20:11:31.674128  319301 logs.go:282] 0 containers: []
	W1227 20:11:31.674137  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:31.674152  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:31.674165  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:31.769885  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:31.769924  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:31.787798  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:31.787829  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:31.840240  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:31.840276  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:31.883880  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:31.883914  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:31.912615  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:31.912645  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:31.993762  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:31.993796  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:32.038771  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:32.038807  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:32.113504  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:32.105141    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.106007    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.106783    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.108406    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.108703    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:32.105141    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.106007    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.106783    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.108406    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:32.108703    4770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:32.113531  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:32.113545  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:32.145482  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:32.145508  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:34.675972  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:34.687181  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:34.687251  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:34.713741  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:34.713768  319301 cri.go:96] found id: ""
	I1227 20:11:34.713776  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:34.713837  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:34.717422  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:34.717525  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:34.742801  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:34.742824  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:34.742829  319301 cri.go:96] found id: ""
	I1227 20:11:34.742836  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:34.742890  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:34.746901  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:34.750347  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:34.750438  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:34.776122  319301 cri.go:96] found id: ""
	I1227 20:11:34.776156  319301 logs.go:282] 0 containers: []
	W1227 20:11:34.776165  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:34.776173  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:34.776241  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:34.801663  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:34.801687  319301 cri.go:96] found id: ""
	I1227 20:11:34.801696  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:34.801752  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:34.805521  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:34.805600  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:34.839033  319301 cri.go:96] found id: ""
	I1227 20:11:34.839059  319301 logs.go:282] 0 containers: []
	W1227 20:11:34.839068  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:34.839075  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:34.839164  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:34.875359  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:34.875380  319301 cri.go:96] found id: ""
	I1227 20:11:34.875389  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:34.875444  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:34.879108  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:34.879203  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:34.904808  319301 cri.go:96] found id: ""
	I1227 20:11:34.904831  319301 logs.go:282] 0 containers: []
	W1227 20:11:34.904839  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:34.904882  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:34.904902  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:35.001157  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:35.001197  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:35.036396  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:35.036492  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:35.100412  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:35.100452  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:35.130486  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:35.130514  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:35.212133  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:35.212170  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:35.261425  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:35.261489  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:35.279972  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:35.280002  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:35.344789  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:35.336875    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.337423    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.338974    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.339514    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.340959    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:35.336875    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.337423    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.338974    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.339514    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:35.340959    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:35.344811  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:35.344826  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:35.388398  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:35.388438  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:37.916139  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:37.926579  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:37.926656  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:37.957965  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:37.957990  319301 cri.go:96] found id: ""
	I1227 20:11:37.958011  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:37.958064  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:37.961819  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:37.961939  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:37.990732  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:37.990756  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:37.990763  319301 cri.go:96] found id: ""
	I1227 20:11:37.990774  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:37.990832  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:37.994865  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:37.998563  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:37.998657  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:38.029180  319301 cri.go:96] found id: ""
	I1227 20:11:38.029206  319301 logs.go:282] 0 containers: []
	W1227 20:11:38.029228  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:38.029235  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:38.029302  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:38.058262  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:38.058287  319301 cri.go:96] found id: ""
	I1227 20:11:38.058295  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:38.058390  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:38.062798  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:38.062895  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:38.093594  319301 cri.go:96] found id: ""
	I1227 20:11:38.093630  319301 logs.go:282] 0 containers: []
	W1227 20:11:38.093641  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:38.093647  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:38.093723  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:38.122677  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:38.122700  319301 cri.go:96] found id: ""
	I1227 20:11:38.122710  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:38.122784  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:38.126481  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:38.126556  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:38.152399  319301 cri.go:96] found id: ""
	I1227 20:11:38.152425  319301 logs.go:282] 0 containers: []
	W1227 20:11:38.152434  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:38.152447  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:38.152459  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:38.169834  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:38.169865  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:38.236553  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:38.228832    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.229398    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.230976    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.231455    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.232939    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:38.228832    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.229398    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.230976    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.231455    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:38.232939    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:38.236574  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:38.236587  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:38.283907  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:38.283942  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:38.327559  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:38.327595  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:38.354915  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:38.354944  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:38.385535  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:38.385567  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:38.482920  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:38.482955  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:38.513709  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:38.513737  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:38.541063  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:38.541092  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:41.120061  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:41.130482  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:41.130560  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:41.157933  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:41.157995  319301 cri.go:96] found id: ""
	I1227 20:11:41.158011  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:41.158068  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:41.161515  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:41.161587  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:41.186761  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:41.186784  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:41.186789  319301 cri.go:96] found id: ""
	I1227 20:11:41.186796  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:41.186853  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:41.190548  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:41.194929  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:41.195019  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:41.225573  319301 cri.go:96] found id: ""
	I1227 20:11:41.225600  319301 logs.go:282] 0 containers: []
	W1227 20:11:41.225609  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:41.225615  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:41.225678  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:41.255736  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:41.255810  319301 cri.go:96] found id: ""
	I1227 20:11:41.255833  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:41.255924  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:41.259619  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:41.259730  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:41.293635  319301 cri.go:96] found id: ""
	I1227 20:11:41.293658  319301 logs.go:282] 0 containers: []
	W1227 20:11:41.293667  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:41.293674  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:41.293736  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:41.325226  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:41.325248  319301 cri.go:96] found id: ""
	I1227 20:11:41.325257  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:41.325311  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:41.328850  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:41.328919  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:41.356320  319301 cri.go:96] found id: ""
	I1227 20:11:41.356345  319301 logs.go:282] 0 containers: []
	W1227 20:11:41.356354  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:41.356370  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:41.356383  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:41.384750  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:41.384777  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:41.438279  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:41.438315  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:41.496771  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:41.496814  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:41.525343  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:41.525373  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:41.558207  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:41.558235  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:41.657075  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:41.657112  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:41.689798  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:41.689828  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:41.769585  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:41.769620  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:41.787874  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:41.787906  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:41.852555  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:41.844441    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.845015    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.846678    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.847233    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.849010    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:41.844441    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.845015    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.846678    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.847233    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:41.849010    5173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:44.353586  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:44.364496  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:44.364591  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:44.396750  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:44.396823  319301 cri.go:96] found id: ""
	I1227 20:11:44.396848  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:44.396920  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:44.400610  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:44.400687  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:44.428171  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:44.428250  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:44.428271  319301 cri.go:96] found id: ""
	I1227 20:11:44.428296  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:44.428411  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:44.432219  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:44.435828  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:44.435901  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:44.464904  319301 cri.go:96] found id: ""
	I1227 20:11:44.464931  319301 logs.go:282] 0 containers: []
	W1227 20:11:44.464953  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:44.464960  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:44.465019  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:44.494508  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:44.494537  319301 cri.go:96] found id: ""
	I1227 20:11:44.494546  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:44.494602  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:44.498485  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:44.498588  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:44.526221  319301 cri.go:96] found id: ""
	I1227 20:11:44.526249  319301 logs.go:282] 0 containers: []
	W1227 20:11:44.526258  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:44.526264  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:44.526337  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:44.557553  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:44.557629  319301 cri.go:96] found id: ""
	I1227 20:11:44.557644  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:44.557713  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:44.561435  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:44.561578  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:44.588202  319301 cri.go:96] found id: ""
	I1227 20:11:44.588227  319301 logs.go:282] 0 containers: []
	W1227 20:11:44.588236  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:44.588250  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:44.588281  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:44.636647  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:44.636688  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:44.715003  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:44.715041  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:44.746461  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:44.746488  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:44.840354  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:44.840392  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:44.910107  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:44.902375    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.902947    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.904566    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.905162    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.906700    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:44.902375    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.902947    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.904566    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.905162    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:44.906700    5264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:44.910127  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:44.910139  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:44.958123  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:44.958155  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:44.988455  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:44.988486  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:45.017637  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:45.017669  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:45.068015  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:45.068047  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:47.639577  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:47.650807  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:47.650879  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:47.680709  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:47.680780  319301 cri.go:96] found id: ""
	I1227 20:11:47.680801  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:47.680886  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:47.684862  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:47.684933  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:47.711503  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:47.711527  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:47.711533  319301 cri.go:96] found id: ""
	I1227 20:11:47.711541  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:47.711597  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:47.715323  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:47.718860  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:47.718939  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:47.745091  319301 cri.go:96] found id: ""
	I1227 20:11:47.745118  319301 logs.go:282] 0 containers: []
	W1227 20:11:47.745128  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:47.745134  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:47.745190  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:47.774661  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:47.774683  319301 cri.go:96] found id: ""
	I1227 20:11:47.774691  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:47.774751  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:47.778781  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:47.778879  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:47.805242  319301 cri.go:96] found id: ""
	I1227 20:11:47.805268  319301 logs.go:282] 0 containers: []
	W1227 20:11:47.805278  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:47.805284  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:47.805350  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:47.833172  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:47.833240  319301 cri.go:96] found id: ""
	I1227 20:11:47.833262  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:47.833351  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:47.837087  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:47.837159  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:47.865275  319301 cri.go:96] found id: ""
	I1227 20:11:47.865353  319301 logs.go:282] 0 containers: []
	W1227 20:11:47.865380  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:47.865432  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:47.865505  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:47.944986  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:47.945022  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:47.980482  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:47.980511  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:47.999608  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:47.999639  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:48.076328  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:48.067348    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.068343    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.070039    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.070763    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.072273    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:48.067348    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.068343    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.070039    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.070763    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:48.072273    5386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:48.076352  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:48.076365  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:48.102940  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:48.102968  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:48.195452  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:48.195490  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:48.225373  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:48.225402  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:48.273525  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:48.273604  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:48.325768  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:48.325805  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:50.855952  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:50.867387  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:50.867456  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:50.897533  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:50.897556  319301 cri.go:96] found id: ""
	I1227 20:11:50.897565  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:50.897617  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:50.900982  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:50.901048  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:50.935428  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:50.935450  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:50.935455  319301 cri.go:96] found id: ""
	I1227 20:11:50.935468  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:50.935521  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:50.939266  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:50.943149  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:50.943266  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:50.974808  319301 cri.go:96] found id: ""
	I1227 20:11:50.974842  319301 logs.go:282] 0 containers: []
	W1227 20:11:50.974852  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:50.974859  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:50.974928  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:51.001867  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:51.001890  319301 cri.go:96] found id: ""
	I1227 20:11:51.001899  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:51.001957  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:51.005758  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:51.005831  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:51.035904  319301 cri.go:96] found id: ""
	I1227 20:11:51.035979  319301 logs.go:282] 0 containers: []
	W1227 20:11:51.036002  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:51.036026  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:51.036134  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:51.064190  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:51.064213  319301 cri.go:96] found id: ""
	I1227 20:11:51.064222  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:51.064277  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:51.068971  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:51.069043  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:51.098066  319301 cri.go:96] found id: ""
	I1227 20:11:51.098092  319301 logs.go:282] 0 containers: []
	W1227 20:11:51.098101  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:51.098116  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:51.098128  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:51.193690  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:51.193731  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:51.236544  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:51.236578  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:51.275361  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:51.275397  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:51.309801  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:51.309827  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:51.327683  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:51.327711  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:51.401236  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:51.392227    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.393287    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.394285    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.395538    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.396222    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:51.392227    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.393287    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.394285    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.395538    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:51.396222    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:51.401259  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:51.401273  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:51.429955  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:51.429985  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:51.492625  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:51.492662  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:51.518481  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:51.518512  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:54.100065  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:54.111435  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:54.111510  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:54.142927  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:54.142956  319301 cri.go:96] found id: ""
	I1227 20:11:54.142975  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:54.143064  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:54.147093  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:54.147233  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:54.173813  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:54.173832  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:54.173837  319301 cri.go:96] found id: ""
	I1227 20:11:54.173844  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:54.173903  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:54.177570  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:54.181008  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:54.181079  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:54.206624  319301 cri.go:96] found id: ""
	I1227 20:11:54.206648  319301 logs.go:282] 0 containers: []
	W1227 20:11:54.206658  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:54.206664  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:54.206720  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:54.232185  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:54.232208  319301 cri.go:96] found id: ""
	I1227 20:11:54.232218  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:54.232281  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:54.236968  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:54.237047  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:54.266150  319301 cri.go:96] found id: ""
	I1227 20:11:54.266172  319301 logs.go:282] 0 containers: []
	W1227 20:11:54.266181  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:54.266187  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:54.266254  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:54.294800  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:54.294820  319301 cri.go:96] found id: ""
	I1227 20:11:54.294829  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:54.294880  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:54.298462  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:54.298526  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:54.323550  319301 cri.go:96] found id: ""
	I1227 20:11:54.323573  319301 logs.go:282] 0 containers: []
	W1227 20:11:54.323582  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:54.323599  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:54.323610  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:54.352757  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:54.352783  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:11:54.383438  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:54.383464  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:54.473431  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:54.473470  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:54.544121  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:54.535951    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.536753    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.538081    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.538522    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.540194    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:54.535951    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.536753    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.538081    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.538522    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:54.540194    5643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:54.544146  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:54.544162  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:54.587199  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:54.587231  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:54.625648  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:54.625675  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:54.708479  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:54.708513  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:54.727026  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:54.727055  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:54.758081  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:54.758110  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:57.311000  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:11:57.321234  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:11:57.321311  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:11:57.349011  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:57.349030  319301 cri.go:96] found id: ""
	I1227 20:11:57.349038  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:11:57.349091  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:57.353198  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:11:57.353266  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:11:57.378464  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:57.378489  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:57.378494  319301 cri.go:96] found id: ""
	I1227 20:11:57.378502  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:11:57.378564  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:57.382492  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:57.385894  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:11:57.385975  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:11:57.410564  319301 cri.go:96] found id: ""
	I1227 20:11:57.410629  319301 logs.go:282] 0 containers: []
	W1227 20:11:57.410642  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:11:57.410650  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:11:57.410708  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:11:57.437790  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:57.437814  319301 cri.go:96] found id: ""
	I1227 20:11:57.437823  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:11:57.437881  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:57.441526  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:11:57.441645  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:11:57.467252  319301 cri.go:96] found id: ""
	I1227 20:11:57.467319  319301 logs.go:282] 0 containers: []
	W1227 20:11:57.467334  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:11:57.467342  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:11:57.467406  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:11:57.495037  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:57.495058  319301 cri.go:96] found id: ""
	I1227 20:11:57.495067  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:11:57.495123  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:11:57.498778  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:11:57.498878  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:11:57.528106  319301 cri.go:96] found id: ""
	I1227 20:11:57.528133  319301 logs.go:282] 0 containers: []
	W1227 20:11:57.528142  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:11:57.528155  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:11:57.528168  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:11:57.619388  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:11:57.619424  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:11:57.650304  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:11:57.650332  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:11:57.699631  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:11:57.699667  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:11:57.743221  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:11:57.743254  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:11:57.769136  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:11:57.769164  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:11:57.786763  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:11:57.786790  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:11:57.859691  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:11:57.849669    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.850063    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.853911    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.854484    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.856001    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:11:57.849669    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.850063    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.853911    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.854484    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:11:57.856001    5788 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:11:57.859713  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:11:57.859728  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:11:57.884558  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:11:57.884586  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:11:57.961115  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:11:57.961152  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:00.497672  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:00.510050  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:00.510129  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:00.544933  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:00.544956  319301 cri.go:96] found id: ""
	I1227 20:12:00.544965  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:00.545025  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:00.549158  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:00.549233  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:00.576607  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:00.576630  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:00.576636  319301 cri.go:96] found id: ""
	I1227 20:12:00.576643  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:00.576700  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:00.580716  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:00.584708  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:00.584783  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:00.623469  319301 cri.go:96] found id: ""
	I1227 20:12:00.623492  319301 logs.go:282] 0 containers: []
	W1227 20:12:00.623501  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:00.623508  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:00.623567  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:00.650388  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:00.650460  319301 cri.go:96] found id: ""
	I1227 20:12:00.650476  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:00.650537  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:00.654531  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:00.654613  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:00.685179  319301 cri.go:96] found id: ""
	I1227 20:12:00.685206  319301 logs.go:282] 0 containers: []
	W1227 20:12:00.685215  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:00.685222  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:00.685283  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:00.716017  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:00.716036  319301 cri.go:96] found id: ""
	I1227 20:12:00.716045  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:00.716102  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:00.720897  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:00.720967  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:00.752084  319301 cri.go:96] found id: ""
	I1227 20:12:00.752108  319301 logs.go:282] 0 containers: []
	W1227 20:12:00.752118  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:00.752133  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:00.752145  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:00.779162  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:00.779191  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:00.828229  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:00.828268  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:00.854975  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:00.855005  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:00.883576  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:00.883606  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:00.965151  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:00.965192  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:01.067209  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:01.067248  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:01.085199  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:01.085232  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:01.155625  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:01.146876    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.148053    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.148721    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.149832    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.150397    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:01.146876    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.148053    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.148721    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.149832    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:01.150397    5917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:01.155647  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:01.155660  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:01.206940  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:01.206978  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:03.749679  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:03.760472  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:03.760548  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:03.788993  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:03.789016  319301 cri.go:96] found id: ""
	I1227 20:12:03.789024  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:03.789079  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:03.792725  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:03.792798  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:03.817942  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:03.817964  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:03.817969  319301 cri.go:96] found id: ""
	I1227 20:12:03.817975  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:03.818031  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:03.821717  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:03.825168  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:03.825254  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:03.851505  319301 cri.go:96] found id: ""
	I1227 20:12:03.851527  319301 logs.go:282] 0 containers: []
	W1227 20:12:03.851536  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:03.851542  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:03.851606  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:03.878946  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:03.878971  319301 cri.go:96] found id: ""
	I1227 20:12:03.878980  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:03.879043  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:03.883057  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:03.883130  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:03.911906  319301 cri.go:96] found id: ""
	I1227 20:12:03.911933  319301 logs.go:282] 0 containers: []
	W1227 20:12:03.911943  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:03.911950  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:03.912009  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:03.942160  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:03.942183  319301 cri.go:96] found id: ""
	I1227 20:12:03.942192  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:03.942252  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:03.946415  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:03.946666  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:03.979149  319301 cri.go:96] found id: ""
	I1227 20:12:03.979174  319301 logs.go:282] 0 containers: []
	W1227 20:12:03.979182  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:03.979198  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:03.979210  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:04.005778  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:04.005811  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:04.088126  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:04.088160  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:04.119438  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:04.119469  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:04.190373  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:04.181899    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.182747    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.184416    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.184965    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.186575    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:04.181899    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.182747    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.184416    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.184965    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:04.186575    6027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:04.190394  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:04.190407  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:04.220233  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:04.220259  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:04.245645  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:04.245671  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:04.345961  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:04.345994  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:04.365659  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:04.365694  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:04.417757  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:04.417791  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:06.964717  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:06.979395  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:06.979502  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:07.006920  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:07.006954  319301 cri.go:96] found id: ""
	I1227 20:12:07.006964  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:07.007030  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:07.012095  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:07.012233  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:07.041413  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:07.041494  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:07.041512  319301 cri.go:96] found id: ""
	I1227 20:12:07.041520  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:07.041598  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:07.045354  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:07.049177  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:07.049259  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:07.083301  319301 cri.go:96] found id: ""
	I1227 20:12:07.083329  319301 logs.go:282] 0 containers: []
	W1227 20:12:07.083338  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:07.083344  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:07.083421  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:07.115313  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:07.115338  319301 cri.go:96] found id: ""
	I1227 20:12:07.115347  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:07.115417  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:07.119201  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:07.119288  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:07.146102  319301 cri.go:96] found id: ""
	I1227 20:12:07.146131  319301 logs.go:282] 0 containers: []
	W1227 20:12:07.146140  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:07.146147  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:07.146208  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:07.172141  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:07.172172  319301 cri.go:96] found id: ""
	I1227 20:12:07.172180  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:07.172247  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:07.175941  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:07.176014  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:07.201635  319301 cri.go:96] found id: ""
	I1227 20:12:07.201661  319301 logs.go:282] 0 containers: []
	W1227 20:12:07.201682  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:07.201699  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:07.201711  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:07.267041  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:07.258167    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.258717    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.260273    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.260745    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.262196    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:07.258167    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.258717    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.260273    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.260745    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:07.262196    6135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:07.267062  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:07.267076  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:07.299653  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:07.299681  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:07.379741  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:07.379776  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:07.478201  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:07.478238  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:07.496143  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:07.496172  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:07.524943  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:07.524973  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:07.588841  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:07.588883  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:07.639348  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:07.639391  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:07.671575  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:07.671608  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
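	Every "describe nodes" attempt in this window fails the same way: the bundled kubectl reads /var/lib/minikube/kubeconfig, which points at https://localhost:8443, and nothing is accepting connections there yet, so each probe ends in "connection refused". The check can be reproduced on the node with the exact command from the log; the readyz probe on the last line is an added illustration, not something the log runs:
	
		sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
		# while the apiserver is unreachable this exits 1 with: dial tcp [::1]:8443: connect: connection refused
		sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw=/readyz
	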
	I1227 20:12:10.217505  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:10.228493  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:10.228562  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:10.262225  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:10.262248  319301 cri.go:96] found id: ""
	I1227 20:12:10.262256  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:10.262312  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:10.267062  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:10.267197  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:10.296434  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:10.296459  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:10.296464  319301 cri.go:96] found id: ""
	I1227 20:12:10.296472  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:10.296529  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:10.300310  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:10.304957  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:10.305022  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:10.330532  319301 cri.go:96] found id: ""
	I1227 20:12:10.330560  319301 logs.go:282] 0 containers: []
	W1227 20:12:10.330570  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:10.330584  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:10.330646  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:10.361300  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:10.361324  319301 cri.go:96] found id: ""
	I1227 20:12:10.361332  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:10.361394  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:10.365025  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:10.365095  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:10.391129  319301 cri.go:96] found id: ""
	I1227 20:12:10.391150  319301 logs.go:282] 0 containers: []
	W1227 20:12:10.391159  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:10.391165  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:10.391228  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:10.427446  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:10.427467  319301 cri.go:96] found id: ""
	I1227 20:12:10.427475  319301 logs.go:282] 1 containers: [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:10.427530  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:10.431147  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:10.431236  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:10.457621  319301 cri.go:96] found id: ""
	I1227 20:12:10.457645  319301 logs.go:282] 0 containers: []
	W1227 20:12:10.457653  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:10.457669  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:10.457680  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:10.497801  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:10.497832  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:10.533576  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:10.533606  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:10.563063  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:10.563092  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:10.595636  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:10.595663  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:10.707654  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:10.707734  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:10.727626  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:10.727752  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:10.859705  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:10.846588    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.847467    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.853805    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.854122    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.855621    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:10.846588    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.847467    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.853805    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.854122    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:10.855621    6316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:10.859774  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:10.859801  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:10.958101  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:10.958183  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:11.020263  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:11.020358  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:13.639948  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:13.650732  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:13.650797  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:13.676632  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:13.676651  319301 cri.go:96] found id: ""
	I1227 20:12:13.676658  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:13.676710  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:13.680432  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:13.680542  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:13.711606  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:13.711625  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:13.711630  319301 cri.go:96] found id: ""
	I1227 20:12:13.711637  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:13.711691  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:13.715265  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:13.718775  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:13.718931  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:13.746245  319301 cri.go:96] found id: ""
	I1227 20:12:13.746275  319301 logs.go:282] 0 containers: []
	W1227 20:12:13.746291  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:13.746298  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:13.746374  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:13.779388  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:13.779409  319301 cri.go:96] found id: ""
	I1227 20:12:13.779418  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:13.779504  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:13.783612  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:13.783685  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:13.808842  319301 cri.go:96] found id: ""
	I1227 20:12:13.808863  319301 logs.go:282] 0 containers: []
	W1227 20:12:13.808872  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:13.808878  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:13.808934  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:13.835153  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:13.835174  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:13.835179  319301 cri.go:96] found id: ""
	I1227 20:12:13.835187  319301 logs.go:282] 2 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:13.835249  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:13.839009  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:13.842805  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:13.842881  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:13.872544  319301 cri.go:96] found id: ""
	I1227 20:12:13.872570  319301 logs.go:282] 0 containers: []
	W1227 20:12:13.872579  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:13.872587  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:13.872599  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:13.898550  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:13.898578  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:13.924170  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:13.924197  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:14.003535  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:14.003571  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:14.105189  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:14.105228  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:14.176586  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:14.168398    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.169127    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.170691    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.171292    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.172935    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:14.168398    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.169127    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.170691    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.171292    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:14.172935    6426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:14.176608  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:14.176622  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:14.204979  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:14.205007  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:14.246862  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:14.246911  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:14.282199  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:14.282225  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:14.315428  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:14.315459  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:14.334814  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:14.334848  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:16.885569  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:16.896097  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:16.896162  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:16.925765  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:16.925785  319301 cri.go:96] found id: ""
	I1227 20:12:16.925794  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:16.925849  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:16.929283  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:16.929349  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:16.954491  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:16.954515  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:16.954520  319301 cri.go:96] found id: ""
	I1227 20:12:16.954528  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:16.954586  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:16.958221  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:16.961382  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:16.961573  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:16.994836  319301 cri.go:96] found id: ""
	I1227 20:12:16.994860  319301 logs.go:282] 0 containers: []
	W1227 20:12:16.994868  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:16.994874  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:16.994933  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:17.021903  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:17.021926  319301 cri.go:96] found id: ""
	I1227 20:12:17.021934  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:17.022017  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:17.025998  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:17.026093  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:17.052024  319301 cri.go:96] found id: ""
	I1227 20:12:17.052049  319301 logs.go:282] 0 containers: []
	W1227 20:12:17.052058  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:17.052083  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:17.052163  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:17.078719  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:17.078740  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:17.078744  319301 cri.go:96] found id: ""
	I1227 20:12:17.078752  319301 logs.go:282] 2 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:17.078826  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:17.082470  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:17.086147  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:17.086220  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:17.116980  319301 cri.go:96] found id: ""
	I1227 20:12:17.117003  319301 logs.go:282] 0 containers: []
	W1227 20:12:17.117013  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:17.117022  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:17.117033  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:17.196379  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:17.196418  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:17.230926  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:17.230959  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:17.250661  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:17.250691  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:17.322817  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:17.314780    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.315442    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.317018    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.317535    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.319106    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:17.314780    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.315442    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.317018    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.317535    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:17.319106    6558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:17.322840  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:17.322856  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:17.351684  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:17.351711  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:17.399098  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:17.399132  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:17.490988  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:17.491023  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:17.556151  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:17.556187  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:17.582835  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:17.582871  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:17.613801  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:17.613837  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
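	The whole pass repeats roughly every three seconds (20:12:03, :06, :10, :13, :16, :20, :23 in the timestamps) and keeps repeating until the apiserver starts answering on 8443. The retries amount to something like the following wait loop run on the node; this is only an illustration of the observed behaviour, not minikube's actual code:
	
		until sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw=/healthz >/dev/null 2>&1; do
		  sleep 3
		done
	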
	I1227 20:12:20.145063  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:20.156515  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:20.156583  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:20.187608  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:20.187635  319301 cri.go:96] found id: ""
	I1227 20:12:20.187645  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:20.187707  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:20.192025  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:20.192105  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:20.224749  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:20.224774  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:20.224780  319301 cri.go:96] found id: ""
	I1227 20:12:20.224788  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:20.224847  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:20.229081  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:20.233080  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:20.233183  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:20.265194  319301 cri.go:96] found id: ""
	I1227 20:12:20.265217  319301 logs.go:282] 0 containers: []
	W1227 20:12:20.265226  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:20.265233  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:20.265290  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:20.294941  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:20.294965  319301 cri.go:96] found id: ""
	I1227 20:12:20.294974  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:20.295030  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:20.299194  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:20.299295  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:20.327103  319301 cri.go:96] found id: ""
	I1227 20:12:20.327127  319301 logs.go:282] 0 containers: []
	W1227 20:12:20.327136  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:20.327142  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:20.327225  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:20.355319  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:20.355340  319301 cri.go:96] found id: "3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:20.355351  319301 cri.go:96] found id: ""
	I1227 20:12:20.355359  319301 logs.go:282] 2 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67]
	I1227 20:12:20.355441  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:20.359302  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:20.362848  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:20.362949  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:20.393433  319301 cri.go:96] found id: ""
	I1227 20:12:20.393488  319301 logs.go:282] 0 containers: []
	W1227 20:12:20.393498  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:20.393527  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:20.393545  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:20.421493  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:20.421522  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:20.498925  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:20.498966  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:20.519854  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:20.519883  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:20.576881  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:20.576922  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:20.621620  319301 logs.go:123] Gathering logs for kube-controller-manager [3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67] ...
	I1227 20:12:20.621656  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3ebe64b334148fd12dd995c515d126f93aeb94c7e64b0fe2cecf9abea3f93c67"
	I1227 20:12:20.649613  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:20.649648  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:20.685860  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:20.685889  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:20.779036  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:20.779072  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:20.846477  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:20.838325    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.838829    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.840489    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.841069    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.842962    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:20.838325    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.838829    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.840489    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.841069    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:20.842962    6727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:20.846497  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:20.846511  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:20.876493  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:20.876523  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:23.407116  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:23.417842  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:23.417914  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:23.449077  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:23.449100  319301 cri.go:96] found id: ""
	I1227 20:12:23.449108  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:23.449162  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:23.452848  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:23.452918  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:23.481566  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:23.481589  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:23.481595  319301 cri.go:96] found id: ""
	I1227 20:12:23.481602  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:23.481661  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:23.485561  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:23.489363  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:23.489433  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:23.515690  319301 cri.go:96] found id: ""
	I1227 20:12:23.515717  319301 logs.go:282] 0 containers: []
	W1227 20:12:23.515727  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:23.515734  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:23.515796  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:23.542113  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:23.542134  319301 cri.go:96] found id: ""
	I1227 20:12:23.542144  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:23.542198  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:23.546461  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:23.546535  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:23.572051  319301 cri.go:96] found id: ""
	I1227 20:12:23.572080  319301 logs.go:282] 0 containers: []
	W1227 20:12:23.572090  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:23.572096  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:23.572154  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:23.598223  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:23.598246  319301 cri.go:96] found id: ""
	I1227 20:12:23.598254  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:23.598308  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:23.602471  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:23.602548  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:23.632139  319301 cri.go:96] found id: ""
	I1227 20:12:23.632162  319301 logs.go:282] 0 containers: []
	W1227 20:12:23.632171  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:23.632185  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:23.632198  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:23.728534  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:23.728573  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:23.746910  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:23.746937  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:23.790408  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:23.790450  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:23.816648  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:23.816683  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:23.844206  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:23.844234  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:23.922341  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:23.922381  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:23.990219  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:23.981959    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.982768    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.984359    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.984673    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.986151    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:23.981959    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.982768    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.984359    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.984673    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:23.986151    6847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:23.990238  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:23.990252  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:24.021769  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:24.021804  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:24.077552  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:24.077591  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
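	The cycle above (pgrep for kube-apiserver, then a per-component crictl lookup, then a 400-line tail of each container found) repeats every few seconds while the apiserver stays unreachable. A minimal shell sketch of that per-component loop, illustrative only and using the component names and tail length seen in the trace, not minikube's actual implementation:
	
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  # Same lookup the trace runs: list all containers (any state) whose name matches.
	  ids=$(sudo crictl --timeout=10s ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "No container was found matching \"$name\""
	    continue
	  fi
	  for id in $ids; do
	    # Tail the last 400 lines of each matching container, as in the trace.
	    sudo crictl logs --tail 400 "$id"
	  done
	done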
	I1227 20:12:26.612708  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:26.623326  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:26.623428  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:26.653266  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:26.653289  319301 cri.go:96] found id: ""
	I1227 20:12:26.653298  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:26.653373  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:26.657260  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:26.657353  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:26.683071  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:26.683092  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:26.683098  319301 cri.go:96] found id: ""
	I1227 20:12:26.683105  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:26.683166  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:26.686901  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:26.690560  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:26.690649  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:26.718862  319301 cri.go:96] found id: ""
	I1227 20:12:26.718885  319301 logs.go:282] 0 containers: []
	W1227 20:12:26.718894  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:26.718900  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:26.718959  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:26.747552  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:26.747574  319301 cri.go:96] found id: ""
	I1227 20:12:26.747582  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:26.747637  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:26.751375  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:26.751452  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:26.777853  319301 cri.go:96] found id: ""
	I1227 20:12:26.777880  319301 logs.go:282] 0 containers: []
	W1227 20:12:26.777889  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:26.777895  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:26.777957  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:26.804445  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:26.804468  319301 cri.go:96] found id: ""
	I1227 20:12:26.804477  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:26.804535  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:26.808568  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:26.808691  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:26.836896  319301 cri.go:96] found id: ""
	I1227 20:12:26.836922  319301 logs.go:282] 0 containers: []
	W1227 20:12:26.836932  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:26.836945  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:26.836960  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:26.857005  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:26.857033  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:26.928707  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:26.920823    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.921472    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.923023    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.923492    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.925222    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:26.920823    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.921472    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.923023    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.923492    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:26.925222    6950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:26.928729  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:26.928742  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:26.956493  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:26.956522  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:26.986280  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:26.986306  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:27.076259  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:27.076295  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:27.172547  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:27.172582  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:27.230338  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:27.230374  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:27.276521  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:27.276554  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:27.308603  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:27.308630  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:29.841840  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:29.852151  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:29.852219  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:29.879885  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:29.879922  319301 cri.go:96] found id: ""
	I1227 20:12:29.879931  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:29.880028  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:29.883662  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:29.883731  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:29.912705  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:29.912727  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:29.912733  319301 cri.go:96] found id: ""
	I1227 20:12:29.912740  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:29.912795  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:29.916252  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:29.921161  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:29.921231  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:29.950824  319301 cri.go:96] found id: ""
	I1227 20:12:29.950846  319301 logs.go:282] 0 containers: []
	W1227 20:12:29.950855  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:29.950862  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:29.950917  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:29.986337  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:29.986357  319301 cri.go:96] found id: ""
	I1227 20:12:29.986365  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:29.986420  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:29.990557  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:29.990644  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:30.034984  319301 cri.go:96] found id: ""
	I1227 20:12:30.035016  319301 logs.go:282] 0 containers: []
	W1227 20:12:30.035027  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:30.035034  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:30.035109  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:30.071248  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:30.071274  319301 cri.go:96] found id: ""
	I1227 20:12:30.071284  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:30.071380  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:30.075947  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:30.076061  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:30.105680  319301 cri.go:96] found id: ""
	I1227 20:12:30.105705  319301 logs.go:282] 0 containers: []
	W1227 20:12:30.105715  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:30.105730  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:30.105748  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:30.135961  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:30.135994  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:30.216289  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:30.216331  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:30.255913  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:30.255946  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:30.355835  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:30.355870  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:30.429441  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:30.421794    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.422353    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.423860    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.424337    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.426060    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:30.421794    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.422353    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.423860    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.424337    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:30.426060    7097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:30.429483  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:30.429495  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:30.458949  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:30.458978  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:30.502640  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:30.502677  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:30.532992  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:30.533023  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:30.557835  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:30.557866  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:33.116429  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:33.127018  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:33.127132  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:33.153291  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:33.153316  319301 cri.go:96] found id: ""
	I1227 20:12:33.153324  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:33.153379  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:33.157166  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:33.157239  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:33.183179  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:33.183200  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:33.183205  319301 cri.go:96] found id: ""
	I1227 20:12:33.183213  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:33.183265  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:33.186752  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:33.190422  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:33.190494  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:33.220717  319301 cri.go:96] found id: ""
	I1227 20:12:33.220739  319301 logs.go:282] 0 containers: []
	W1227 20:12:33.220748  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:33.220754  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:33.220818  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:33.251060  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:33.251083  319301 cri.go:96] found id: ""
	I1227 20:12:33.251091  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:33.251145  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:33.254679  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:33.254748  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:33.286493  319301 cri.go:96] found id: ""
	I1227 20:12:33.286518  319301 logs.go:282] 0 containers: []
	W1227 20:12:33.286527  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:33.286533  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:33.286620  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:33.313587  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:33.313613  319301 cri.go:96] found id: ""
	I1227 20:12:33.313622  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:33.313680  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:33.317328  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:33.317408  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:33.343846  319301 cri.go:96] found id: ""
	I1227 20:12:33.343871  319301 logs.go:282] 0 containers: []
	W1227 20:12:33.343880  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:33.343893  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:33.343925  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:33.438565  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:33.438603  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:33.457675  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:33.457705  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:33.525788  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:33.517888    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.518628    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.520164    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.520718    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.522282    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:33.517888    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.518628    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.520164    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.520718    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:33.522282    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:33.525811  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:33.525825  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:33.552529  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:33.552556  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:33.580140  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:33.580172  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:33.641393  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:33.641499  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:33.693161  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:33.693199  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:33.724867  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:33.724893  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:33.805497  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:33.805537  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:36.337435  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:36.352136  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:36.352206  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:36.378464  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:36.378486  319301 cri.go:96] found id: ""
	I1227 20:12:36.378494  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:36.378548  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:36.382431  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:36.382500  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:36.408340  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:36.408362  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:36.408367  319301 cri.go:96] found id: ""
	I1227 20:12:36.408375  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:36.408430  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:36.411977  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:36.415450  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:36.415561  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:36.441750  319301 cri.go:96] found id: ""
	I1227 20:12:36.441773  319301 logs.go:282] 0 containers: []
	W1227 20:12:36.441781  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:36.441789  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:36.441849  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:36.469111  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:36.469133  319301 cri.go:96] found id: ""
	I1227 20:12:36.469141  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:36.469193  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:36.472982  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:36.473055  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:36.501345  319301 cri.go:96] found id: ""
	I1227 20:12:36.501368  319301 logs.go:282] 0 containers: []
	W1227 20:12:36.501378  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:36.501384  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:36.501477  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:36.527577  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:36.527600  319301 cri.go:96] found id: ""
	I1227 20:12:36.527608  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:36.527664  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:36.531477  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:36.531552  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:36.561054  319301 cri.go:96] found id: ""
	I1227 20:12:36.561130  319301 logs.go:282] 0 containers: []
	W1227 20:12:36.561154  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:36.561181  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:36.561217  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:36.589983  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:36.590014  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:36.669955  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:36.669994  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:36.768958  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:36.768994  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:36.787310  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:36.787336  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:36.856793  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:36.848163    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.849099    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.850911    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.851491    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.853132    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:36.848163    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.849099    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.850911    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.851491    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:36.853132    7345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:36.856819  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:36.856834  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:36.909328  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:36.909366  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:36.960708  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:36.960741  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:36.988799  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:36.988826  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:37.020389  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:37.020426  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:39.556036  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:39.567454  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:39.567523  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:39.597767  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:39.597789  319301 cri.go:96] found id: ""
	I1227 20:12:39.597797  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:39.597853  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:39.601347  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:39.601417  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:39.630309  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:39.630330  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:39.630335  319301 cri.go:96] found id: ""
	I1227 20:12:39.630343  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:39.630395  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:39.634109  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:39.637369  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:39.637474  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:39.664492  319301 cri.go:96] found id: ""
	I1227 20:12:39.664515  319301 logs.go:282] 0 containers: []
	W1227 20:12:39.664523  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:39.664536  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:39.664595  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:39.689554  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:39.689585  319301 cri.go:96] found id: ""
	I1227 20:12:39.689594  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:39.689648  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:39.693184  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:39.693251  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:39.719030  319301 cri.go:96] found id: ""
	I1227 20:12:39.719057  319301 logs.go:282] 0 containers: []
	W1227 20:12:39.719066  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:39.719073  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:39.719131  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:39.751945  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:39.751967  319301 cri.go:96] found id: ""
	I1227 20:12:39.751976  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:39.752058  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:39.755910  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:39.755984  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:39.787281  319301 cri.go:96] found id: ""
	I1227 20:12:39.787306  319301 logs.go:282] 0 containers: []
	W1227 20:12:39.787315  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:39.787329  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:39.787341  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:39.818112  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:39.818181  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:39.877195  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:39.877228  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:39.902875  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:39.902908  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:39.933383  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:39.933411  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:39.964696  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:39.964725  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:40.094427  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:40.094546  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:40.115127  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:40.115169  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:40.188369  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:40.178140    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.178935    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.180929    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.181956    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.182727    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:40.178140    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.178935    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.180929    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.181956    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:40.182727    7506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:40.188403  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:40.188417  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:40.248250  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:40.248293  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:42.832956  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:42.843630  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:42.843716  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:42.880632  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:42.880654  319301 cri.go:96] found id: ""
	I1227 20:12:42.880662  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:42.880716  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:42.884197  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:42.884283  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:42.912329  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:42.912351  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:42.912356  319301 cri.go:96] found id: ""
	I1227 20:12:42.912363  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:42.912420  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:42.919733  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:42.924460  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:42.924555  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:42.950089  319301 cri.go:96] found id: ""
	I1227 20:12:42.950112  319301 logs.go:282] 0 containers: []
	W1227 20:12:42.950120  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:42.950126  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:42.950186  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:42.982372  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:42.982393  319301 cri.go:96] found id: ""
	I1227 20:12:42.982400  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:42.982454  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:42.985981  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:42.986048  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:43.025247  319301 cri.go:96] found id: ""
	I1227 20:12:43.025270  319301 logs.go:282] 0 containers: []
	W1227 20:12:43.025279  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:43.025285  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:43.025345  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:43.051039  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:43.051058  319301 cri.go:96] found id: ""
	I1227 20:12:43.051066  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:43.051128  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:43.055686  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:43.055774  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:43.080239  319301 cri.go:96] found id: ""
	I1227 20:12:43.080305  319301 logs.go:282] 0 containers: []
	W1227 20:12:43.080328  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:43.080365  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:43.080392  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:43.117618  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:43.117647  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:43.203203  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:43.203243  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:43.233482  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:43.233514  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:43.331030  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:43.331068  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:43.400596  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:43.391562    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.392218    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.393995    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.395389    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.396936    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:43.391562    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.392218    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.393995    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.395389    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:43.396936    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:43.400620  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:43.400635  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:43.451280  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:43.451316  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:43.469068  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:43.469097  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:43.497581  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:43.497607  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:43.541271  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:43.541307  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:46.066721  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:46.077342  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:46.077418  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:46.106073  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:46.106096  319301 cri.go:96] found id: ""
	I1227 20:12:46.106105  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:46.106161  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:46.110573  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:46.110647  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:46.141403  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:46.141426  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:46.141431  319301 cri.go:96] found id: ""
	I1227 20:12:46.141438  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:46.141524  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:46.146711  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:46.150119  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:46.150207  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:46.177378  319301 cri.go:96] found id: ""
	I1227 20:12:46.177403  319301 logs.go:282] 0 containers: []
	W1227 20:12:46.177411  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:46.177418  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:46.177523  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:46.203465  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:46.203488  319301 cri.go:96] found id: ""
	I1227 20:12:46.203497  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:46.203554  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:46.207163  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:46.207260  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:46.232721  319301 cri.go:96] found id: ""
	I1227 20:12:46.232748  319301 logs.go:282] 0 containers: []
	W1227 20:12:46.232757  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:46.232764  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:46.232849  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:46.260899  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:46.260924  319301 cri.go:96] found id: ""
	I1227 20:12:46.260933  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:46.261004  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:46.264880  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:46.264994  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:46.294702  319301 cri.go:96] found id: ""
	I1227 20:12:46.294772  319301 logs.go:282] 0 containers: []
	W1227 20:12:46.294788  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:46.294802  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:46.294815  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:46.392870  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:46.392907  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:46.411136  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:46.411165  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:46.442076  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:46.442105  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:46.507864  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:46.500419    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.500963    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.502621    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.503081    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.504499    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:46.500419    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.500963    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.502621    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.503081    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:46.504499    7728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:46.507887  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:46.507900  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:46.534504  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:46.534534  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:46.599046  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:46.599082  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:46.644197  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:46.644234  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:46.674716  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:46.674743  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:46.703463  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:46.703492  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:49.285570  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:49.295868  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:49.295960  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:49.323445  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:49.323469  319301 cri.go:96] found id: ""
	I1227 20:12:49.323477  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:49.323567  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:49.327039  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:49.327106  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:49.353757  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:49.353781  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:49.353787  319301 cri.go:96] found id: ""
	I1227 20:12:49.353794  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:49.353854  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:49.360531  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:49.364480  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:49.364568  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:49.392254  319301 cri.go:96] found id: ""
	I1227 20:12:49.392325  319301 logs.go:282] 0 containers: []
	W1227 20:12:49.392349  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:49.392374  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:49.392458  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:49.422197  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:49.422218  319301 cri.go:96] found id: ""
	I1227 20:12:49.422226  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:49.422279  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:49.425742  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:49.425813  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:49.451624  319301 cri.go:96] found id: ""
	I1227 20:12:49.451650  319301 logs.go:282] 0 containers: []
	W1227 20:12:49.451659  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:49.451665  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:49.451725  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:49.477813  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:49.477836  319301 cri.go:96] found id: ""
	I1227 20:12:49.477846  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:49.477911  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:49.481531  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:49.481625  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:49.507374  319301 cri.go:96] found id: ""
	I1227 20:12:49.507400  319301 logs.go:282] 0 containers: []
	W1227 20:12:49.507409  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:49.507425  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:49.507438  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:49.598294  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:49.598336  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:49.636279  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:49.636307  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:49.707651  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:49.707686  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:49.765937  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:49.765972  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:49.783282  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:49.783310  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:49.868264  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:49.856321    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.857001    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.858772    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.863251    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.863608    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:49.856321    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.857001    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.858772    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.863251    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:49.863608    7866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:49.868294  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:49.868307  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:49.894496  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:49.894524  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:49.919827  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:49.919864  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:50.000367  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:50.000443  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:52.556360  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:52.566511  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:52.566580  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:52.593484  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:52.593517  319301 cri.go:96] found id: ""
	I1227 20:12:52.593527  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:52.593640  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:52.597279  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:52.597349  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:52.623469  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:52.623547  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:52.623568  319301 cri.go:96] found id: ""
	I1227 20:12:52.623591  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:52.623659  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:52.627305  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:52.630834  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:52.630949  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:52.657093  319301 cri.go:96] found id: ""
	I1227 20:12:52.657120  319301 logs.go:282] 0 containers: []
	W1227 20:12:52.657130  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:52.657136  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:52.657201  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:52.683396  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:52.683470  319301 cri.go:96] found id: ""
	I1227 20:12:52.683487  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:52.683556  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:52.687311  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:52.687381  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:52.716233  319301 cri.go:96] found id: ""
	I1227 20:12:52.716257  319301 logs.go:282] 0 containers: []
	W1227 20:12:52.716266  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:52.716273  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:52.716333  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:52.742458  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:52.742482  319301 cri.go:96] found id: ""
	I1227 20:12:52.742491  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:52.742547  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:52.746498  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:52.746629  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:52.771746  319301 cri.go:96] found id: ""
	I1227 20:12:52.771772  319301 logs.go:282] 0 containers: []
	W1227 20:12:52.771781  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:52.771820  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:52.771837  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:52.824894  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:52.824929  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:52.854289  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:52.854318  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:52.889855  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:52.889887  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:52.993260  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:52.993294  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:53.038574  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:53.038617  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:53.071005  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:53.071035  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:53.149881  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:53.149919  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:53.167391  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:53.167547  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:53.240789  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:53.230138    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.230860    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.232667    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.233277    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.236557    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:53.230138    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.230860    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.232667    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.233277    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:53.236557    8018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:53.240810  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:53.240823  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:55.779743  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:55.790606  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:55.790677  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:55.817091  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:55.817112  319301 cri.go:96] found id: ""
	I1227 20:12:55.817121  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:55.817176  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:55.820799  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:55.820876  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:55.850874  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:55.850897  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:55.850903  319301 cri.go:96] found id: ""
	I1227 20:12:55.850911  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:55.850964  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:55.854708  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:55.858278  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:55.858347  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:55.887432  319301 cri.go:96] found id: ""
	I1227 20:12:55.887456  319301 logs.go:282] 0 containers: []
	W1227 20:12:55.887465  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:55.887471  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:55.887526  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:55.914817  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:55.914839  319301 cri.go:96] found id: ""
	I1227 20:12:55.914847  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:55.914903  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:55.918494  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:55.918571  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:55.948625  319301 cri.go:96] found id: ""
	I1227 20:12:55.948648  319301 logs.go:282] 0 containers: []
	W1227 20:12:55.948657  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:55.948664  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:55.948733  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:55.984844  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:55.984867  319301 cri.go:96] found id: ""
	I1227 20:12:55.984875  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:55.984930  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:55.988564  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:55.988652  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:56.016926  319301 cri.go:96] found id: ""
	I1227 20:12:56.016956  319301 logs.go:282] 0 containers: []
	W1227 20:12:56.016966  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:56.016982  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:56.016994  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:12:56.118289  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:56.118325  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:56.136502  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:56.136532  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:56.169081  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:56.169108  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:56.211041  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:56.211076  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:56.243209  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:56.243244  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:56.314060  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:56.305651    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.306321    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.307810    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.308362    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.310021    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:56.305651    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.306321    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.307810    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.308362    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:56.310021    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:56.314082  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:56.314098  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:56.377302  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:56.377341  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:56.410912  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:56.410991  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:56.438190  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:56.438218  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:59.018860  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:12:59.029806  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:12:59.029879  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:12:59.058607  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:59.058631  319301 cri.go:96] found id: ""
	I1227 20:12:59.058640  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:12:59.058697  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:59.062467  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:12:59.062544  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:12:59.091353  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:59.091376  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:59.091382  319301 cri.go:96] found id: ""
	I1227 20:12:59.091389  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:12:59.091445  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:59.095198  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:59.100058  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:12:59.100137  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:12:59.126292  319301 cri.go:96] found id: ""
	I1227 20:12:59.126317  319301 logs.go:282] 0 containers: []
	W1227 20:12:59.126326  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:12:59.126333  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:12:59.126397  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:12:59.155155  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:59.155177  319301 cri.go:96] found id: ""
	I1227 20:12:59.155186  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:12:59.155242  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:59.158920  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:12:59.158992  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:12:59.189092  319301 cri.go:96] found id: ""
	I1227 20:12:59.189159  319301 logs.go:282] 0 containers: []
	W1227 20:12:59.189181  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:12:59.189206  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:12:59.189294  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:12:59.216198  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:59.216262  319301 cri.go:96] found id: ""
	I1227 20:12:59.216285  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:12:59.216377  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:12:59.224385  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:12:59.224486  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:12:59.252259  319301 cri.go:96] found id: ""
	I1227 20:12:59.252285  319301 logs.go:282] 0 containers: []
	W1227 20:12:59.252294  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:12:59.252309  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:12:59.252342  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:12:59.273005  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:12:59.273034  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:12:59.301850  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:12:59.301881  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:12:59.356187  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:12:59.356221  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:12:59.399819  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:12:59.399852  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:12:59.433910  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:12:59.433941  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:12:59.513398  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:12:59.513432  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:12:59.549380  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:12:59.549409  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:12:59.623298  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:12:59.615506    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.615904    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.617387    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.618024    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.619495    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:12:59.615506    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.615904    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.617387    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.618024    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:12:59.619495    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:12:59.623322  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:12:59.623336  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:12:59.649178  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:12:59.649207  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:02.243275  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:02.254105  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:02.254177  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:02.286583  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:02.286605  319301 cri.go:96] found id: ""
	I1227 20:13:02.286613  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:02.286669  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:02.290640  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:02.290708  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:02.317723  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:02.317746  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:02.317752  319301 cri.go:96] found id: ""
	I1227 20:13:02.317760  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:02.317817  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:02.322227  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:02.325742  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:02.325814  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:02.352306  319301 cri.go:96] found id: ""
	I1227 20:13:02.352333  319301 logs.go:282] 0 containers: []
	W1227 20:13:02.352342  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:02.352349  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:02.352409  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:02.378873  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:02.378896  319301 cri.go:96] found id: ""
	I1227 20:13:02.378906  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:02.378961  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:02.383556  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:02.383681  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:02.421495  319301 cri.go:96] found id: ""
	I1227 20:13:02.421526  319301 logs.go:282] 0 containers: []
	W1227 20:13:02.421550  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:02.421579  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:02.421661  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:02.454963  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:02.454985  319301 cri.go:96] found id: ""
	I1227 20:13:02.454994  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:02.455071  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:02.458781  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:02.458901  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:02.488822  319301 cri.go:96] found id: ""
	I1227 20:13:02.488848  319301 logs.go:282] 0 containers: []
	W1227 20:13:02.488857  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:02.488872  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:02.488904  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:02.513914  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:02.513945  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:02.543786  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:02.543815  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:02.602843  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:02.602877  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:02.634221  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:02.634257  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:02.736305  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:02.736347  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:02.812827  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:02.803912    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.804866    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.806654    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.807254    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.808858    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:02.803912    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.804866    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.806654    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.807254    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:02.808858    8383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:02.812848  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:02.812861  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:02.870730  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:02.870770  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:02.896826  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:02.896857  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:02.928575  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:02.928604  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:05.512539  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:05.522703  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:05.522777  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:05.549167  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:05.549187  319301 cri.go:96] found id: ""
	I1227 20:13:05.549195  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:05.549252  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:05.553114  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:05.553224  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:05.591305  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:05.591329  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:05.591334  319301 cri.go:96] found id: ""
	I1227 20:13:05.591342  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:05.591399  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:05.595292  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:05.598966  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:05.599090  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:05.626541  319301 cri.go:96] found id: ""
	I1227 20:13:05.626567  319301 logs.go:282] 0 containers: []
	W1227 20:13:05.626576  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:05.626583  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:05.626644  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:05.658675  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:05.658707  319301 cri.go:96] found id: ""
	I1227 20:13:05.658715  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:05.658771  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:05.662500  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:05.662571  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:05.694208  319301 cri.go:96] found id: ""
	I1227 20:13:05.694232  319301 logs.go:282] 0 containers: []
	W1227 20:13:05.694241  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:05.694248  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:05.694310  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:05.721109  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:05.721133  319301 cri.go:96] found id: ""
	I1227 20:13:05.721152  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:05.721212  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:05.724940  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:05.725010  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:05.751566  319301 cri.go:96] found id: ""
	I1227 20:13:05.751594  319301 logs.go:282] 0 containers: []
	W1227 20:13:05.751604  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:05.751643  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:05.751660  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:05.849663  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:05.849750  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:05.868576  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:05.868607  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:05.934428  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:05.925753    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.926400    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.928037    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.928648    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.930245    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:05.925753    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.926400    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.928037    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.928648    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:05.930245    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:05.934452  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:05.934466  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:05.965352  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:05.965378  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:06.020452  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:06.020494  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:06.054720  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:06.054750  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:06.084316  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:06.084346  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:06.166870  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:06.166934  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:06.221058  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:06.221095  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:08.753099  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:08.764525  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:08.764592  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:08.790692  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:08.790714  319301 cri.go:96] found id: ""
	I1227 20:13:08.790725  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:08.790781  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:08.794565  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:08.794679  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:08.820711  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:08.820730  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:08.820734  319301 cri.go:96] found id: ""
	I1227 20:13:08.820741  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:08.820797  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:08.824460  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:08.827902  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:08.827991  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:08.869147  319301 cri.go:96] found id: ""
	I1227 20:13:08.869171  319301 logs.go:282] 0 containers: []
	W1227 20:13:08.869184  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:08.869190  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:08.869273  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:08.897503  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:08.897528  319301 cri.go:96] found id: ""
	I1227 20:13:08.897545  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:08.897605  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:08.902138  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:08.902257  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:08.931144  319301 cri.go:96] found id: ""
	I1227 20:13:08.931168  319301 logs.go:282] 0 containers: []
	W1227 20:13:08.931177  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:08.931183  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:08.931240  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:08.958779  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:08.958802  319301 cri.go:96] found id: ""
	I1227 20:13:08.958810  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:08.958892  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:08.962888  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:08.962966  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:08.991222  319301 cri.go:96] found id: ""
	I1227 20:13:08.991248  319301 logs.go:282] 0 containers: []
	W1227 20:13:08.991257  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:08.991270  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:08.991310  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:09.009225  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:09.009256  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:09.081569  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:09.073722    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.074157    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.075724    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.076257    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.078038    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:09.073722    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.074157    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.075724    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.076257    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:09.078038    8606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:09.081592  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:09.081608  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:09.112754  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:09.112780  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:09.163779  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:09.163815  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:09.189441  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:09.189512  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:09.271488  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:09.271569  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:09.314936  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:09.314962  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:09.413305  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:09.413344  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:09.465609  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:09.465639  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:12.002552  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:12.014182  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:12.014264  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:12.052377  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:12.052400  319301 cri.go:96] found id: ""
	I1227 20:13:12.052409  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:12.052466  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:12.056292  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:12.056394  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:12.085743  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:12.085765  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:12.085770  319301 cri.go:96] found id: ""
	I1227 20:13:12.085778  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:12.085835  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:12.089812  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:12.093801  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:12.093896  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:12.122289  319301 cri.go:96] found id: ""
	I1227 20:13:12.122359  319301 logs.go:282] 0 containers: []
	W1227 20:13:12.122386  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:12.122402  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:12.122476  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:12.149731  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:12.149758  319301 cri.go:96] found id: ""
	I1227 20:13:12.149767  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:12.149823  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:12.153602  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:12.153688  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:12.178711  319301 cri.go:96] found id: ""
	I1227 20:13:12.178786  319301 logs.go:282] 0 containers: []
	W1227 20:13:12.178808  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:12.178832  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:12.178917  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:12.205322  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:12.205350  319301 cri.go:96] found id: ""
	I1227 20:13:12.205360  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:12.205414  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:12.209024  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:12.209091  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:12.234488  319301 cri.go:96] found id: ""
	I1227 20:13:12.234557  319301 logs.go:282] 0 containers: []
	W1227 20:13:12.234582  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:12.234609  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:12.234640  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:12.261610  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:12.261639  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:12.315635  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:12.315673  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:12.376280  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:12.376313  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:12.402133  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:12.402165  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:12.430982  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:12.431051  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:12.512045  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:12.512078  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:12.530685  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:12.530716  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:12.568375  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:12.568405  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:12.668785  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:12.668822  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:12.735523  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:12.727415    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.728180    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.729943    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.730267    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.732211    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:12.727415    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.728180    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.729943    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.730267    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:12.732211    8790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:15.236014  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:15.247391  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:15.247466  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:15.277268  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:15.277342  319301 cri.go:96] found id: ""
	I1227 20:13:15.277365  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:15.277488  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:15.282305  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:15.282373  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:15.312415  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:15.312436  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:15.312441  319301 cri.go:96] found id: ""
	I1227 20:13:15.312449  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:15.312503  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:15.316541  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:15.319901  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:15.319970  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:15.346399  319301 cri.go:96] found id: ""
	I1227 20:13:15.346424  319301 logs.go:282] 0 containers: []
	W1227 20:13:15.346432  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:15.346439  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:15.346496  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:15.373083  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:15.373104  319301 cri.go:96] found id: ""
	I1227 20:13:15.373112  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:15.373165  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:15.376806  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:15.376918  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:15.401683  319301 cri.go:96] found id: ""
	I1227 20:13:15.401708  319301 logs.go:282] 0 containers: []
	W1227 20:13:15.401717  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:15.401725  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:15.401784  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:15.425772  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:15.425796  319301 cri.go:96] found id: ""
	I1227 20:13:15.425804  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:15.425865  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:15.429359  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:15.429426  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:15.457327  319301 cri.go:96] found id: ""
	I1227 20:13:15.457352  319301 logs.go:282] 0 containers: []
	W1227 20:13:15.457361  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:15.457374  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:15.457387  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:15.499826  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:15.499863  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:15.530003  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:15.530040  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:15.557784  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:15.557811  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:15.637950  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:15.637987  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:15.706856  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:15.696364    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.696954    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.699252    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.700375    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.701334    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:15.696364    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.696954    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.699252    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.700375    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:15.701334    8886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:15.706878  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:15.706893  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:15.742198  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:15.742227  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:15.838586  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:15.838624  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:15.857986  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:15.858016  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:15.889281  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:15.889313  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:18.468232  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:18.478612  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:18.478682  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:18.506032  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:18.506056  319301 cri.go:96] found id: ""
	I1227 20:13:18.506064  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:18.506116  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:18.509751  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:18.509832  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:18.537503  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:18.537527  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:18.537533  319301 cri.go:96] found id: ""
	I1227 20:13:18.537541  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:18.537645  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:18.543736  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:18.548696  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:18.548770  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:18.574950  319301 cri.go:96] found id: ""
	I1227 20:13:18.574986  319301 logs.go:282] 0 containers: []
	W1227 20:13:18.574996  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:18.575003  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:18.575063  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:18.603311  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:18.603330  319301 cri.go:96] found id: ""
	I1227 20:13:18.603337  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:18.603391  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:18.607317  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:18.607399  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:18.637190  319301 cri.go:96] found id: ""
	I1227 20:13:18.637214  319301 logs.go:282] 0 containers: []
	W1227 20:13:18.637223  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:18.637230  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:18.637290  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:18.664240  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:18.664260  319301 cri.go:96] found id: ""
	I1227 20:13:18.664268  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:18.664323  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:18.667779  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:18.667845  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:18.694174  319301 cri.go:96] found id: ""
	I1227 20:13:18.694198  319301 logs.go:282] 0 containers: []
	W1227 20:13:18.694208  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:18.694222  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:18.694235  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:18.718997  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:18.719027  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:18.745989  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:18.746067  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:18.822381  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:18.822419  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:18.867357  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:18.867387  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:18.970030  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:18.970069  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:18.991124  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:18.991208  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:19.073512  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:19.064985    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.065841    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.067396    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.067963    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.069601    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:19.064985    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.065841    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.067396    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.067963    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:19.069601    9024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:19.073537  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:19.073559  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:19.102691  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:19.102717  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:19.156409  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:19.156445  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:21.705847  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:21.716387  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:21.716462  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:21.750665  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:21.750735  319301 cri.go:96] found id: ""
	I1227 20:13:21.750770  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:21.750862  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:21.754653  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:21.754723  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:21.779914  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:21.779938  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:21.779944  319301 cri.go:96] found id: ""
	I1227 20:13:21.779952  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:21.780015  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:21.783993  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:21.787625  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:21.787696  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:21.813514  319301 cri.go:96] found id: ""
	I1227 20:13:21.813543  319301 logs.go:282] 0 containers: []
	W1227 20:13:21.813552  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:21.813559  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:21.813629  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:21.844946  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:21.844968  319301 cri.go:96] found id: ""
	I1227 20:13:21.844976  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:21.845035  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:21.848813  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:21.848884  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:21.874101  319301 cri.go:96] found id: ""
	I1227 20:13:21.874174  319301 logs.go:282] 0 containers: []
	W1227 20:13:21.874190  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:21.874197  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:21.874255  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:21.900432  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:21.900455  319301 cri.go:96] found id: ""
	I1227 20:13:21.900463  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:21.900518  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:21.904020  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:21.904092  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:21.931082  319301 cri.go:96] found id: ""
	I1227 20:13:21.931107  319301 logs.go:282] 0 containers: []
	W1227 20:13:21.931116  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:21.931130  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:21.931173  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:21.977536  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:21.977621  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:22.057131  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:22.057167  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:22.162849  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:22.162890  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:22.181044  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:22.181074  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:22.251501  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:22.243628    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.244178    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.245787    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.246465    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.248081    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:22.243628    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.244178    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.245787    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.246465    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:22.248081    9134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:22.251520  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:22.251532  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:22.322039  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:22.322076  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:22.348945  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:22.348981  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:22.376440  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:22.376468  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:22.411192  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:22.411219  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:24.942580  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:24.952758  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:24.952881  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:24.984548  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:24.984572  319301 cri.go:96] found id: ""
	I1227 20:13:24.984580  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:24.984656  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:24.988133  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:24.988203  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:25.026479  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:25.026581  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:25.026603  319301 cri.go:96] found id: ""
	I1227 20:13:25.026645  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:25.026785  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:25.030841  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:25.034716  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:25.034800  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:25.061711  319301 cri.go:96] found id: ""
	I1227 20:13:25.061738  319301 logs.go:282] 0 containers: []
	W1227 20:13:25.061747  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:25.061753  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:25.061810  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:25.089318  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:25.089386  319301 cri.go:96] found id: ""
	I1227 20:13:25.089409  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:25.089517  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:25.093670  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:25.093795  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:25.121407  319301 cri.go:96] found id: ""
	I1227 20:13:25.121525  319301 logs.go:282] 0 containers: []
	W1227 20:13:25.121549  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:25.121569  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:25.121669  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:25.149007  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:25.149080  319301 cri.go:96] found id: ""
	I1227 20:13:25.149103  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:25.149187  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:25.153407  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:25.153596  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:25.179032  319301 cri.go:96] found id: ""
	I1227 20:13:25.179057  319301 logs.go:282] 0 containers: []
	W1227 20:13:25.179066  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:25.179079  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:25.179090  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:25.276200  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:25.276277  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:25.348617  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:25.340243    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.340862    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.343120    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.343588    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.345111    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:25.340243    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.340862    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.343120    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.343588    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:25.345111    9249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:25.348638  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:25.348655  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:25.406272  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:25.406306  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:25.452731  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:25.452768  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:25.480251  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:25.480280  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:25.557948  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:25.557985  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:25.593809  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:25.593838  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:25.615397  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:25.615429  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:25.646218  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:25.646248  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:28.174341  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:28.185173  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:28.185244  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:28.211104  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:28.211127  319301 cri.go:96] found id: ""
	I1227 20:13:28.211136  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:28.211191  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:28.214901  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:28.215009  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:28.246215  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:28.246280  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:28.246301  319301 cri.go:96] found id: ""
	I1227 20:13:28.246324  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:28.246405  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:28.250387  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:28.253817  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:28.253888  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:28.287626  319301 cri.go:96] found id: ""
	I1227 20:13:28.287651  319301 logs.go:282] 0 containers: []
	W1227 20:13:28.287659  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:28.287665  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:28.287725  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:28.316933  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:28.316954  319301 cri.go:96] found id: ""
	I1227 20:13:28.316962  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:28.317018  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:28.320933  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:28.321004  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:28.347084  319301 cri.go:96] found id: ""
	I1227 20:13:28.347112  319301 logs.go:282] 0 containers: []
	W1227 20:13:28.347122  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:28.347128  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:28.347185  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:28.378083  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:28.378106  319301 cri.go:96] found id: ""
	I1227 20:13:28.378115  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:28.378169  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:28.382099  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:28.382172  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:28.409209  319301 cri.go:96] found id: ""
	I1227 20:13:28.409235  319301 logs.go:282] 0 containers: []
	W1227 20:13:28.409244  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:28.409257  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:28.409270  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:28.427091  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:28.427120  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:28.490226  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:28.482506    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.483031    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.484594    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.484922    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.486441    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:28.482506    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.483031    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.484594    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.484922    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:28.486441    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:28.490251  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:28.490265  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:28.531892  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:28.531924  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:28.557604  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:28.557631  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:28.652391  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:28.652428  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:28.680025  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:28.680051  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:28.737147  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:28.737182  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:28.765648  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:28.765682  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:28.843337  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:28.843374  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:31.382818  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:31.393355  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:31.393426  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:31.420305  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:31.420328  319301 cri.go:96] found id: ""
	I1227 20:13:31.420336  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:31.420391  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:31.424001  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:31.424074  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:31.460581  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:31.460615  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:31.460621  319301 cri.go:96] found id: ""
	I1227 20:13:31.460635  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:31.460702  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:31.464544  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:31.468299  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:31.468414  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:31.500491  319301 cri.go:96] found id: ""
	I1227 20:13:31.500517  319301 logs.go:282] 0 containers: []
	W1227 20:13:31.500526  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:31.500533  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:31.500590  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:31.527178  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:31.527203  319301 cri.go:96] found id: ""
	I1227 20:13:31.527211  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:31.527273  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:31.530886  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:31.530980  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:31.558444  319301 cri.go:96] found id: ""
	I1227 20:13:31.558466  319301 logs.go:282] 0 containers: []
	W1227 20:13:31.558475  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:31.558482  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:31.558583  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:31.583987  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:31.584010  319301 cri.go:96] found id: ""
	I1227 20:13:31.584019  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:31.584072  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:31.587656  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:31.587728  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:31.613640  319301 cri.go:96] found id: ""
	I1227 20:13:31.613662  319301 logs.go:282] 0 containers: []
	W1227 20:13:31.613671  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:31.613692  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:31.613708  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:31.642242  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:31.642274  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:31.724401  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:31.724439  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:31.793926  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:31.785945    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.786581    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.788181    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.788659    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.789864    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:31.785945    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.786581    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.788181    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.788659    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:31.789864    9513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:31.793989  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:31.794011  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:31.825164  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:31.825193  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:31.877179  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:31.877211  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:31.912284  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:31.912319  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:32.015514  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:32.015558  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:32.034674  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:32.034705  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:32.099008  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:32.099062  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:34.634778  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:34.656177  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:34.656243  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:34.684782  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:34.684801  319301 cri.go:96] found id: ""
	I1227 20:13:34.684810  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:34.684865  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:34.688514  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:34.688585  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:34.712895  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:34.712915  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:34.712921  319301 cri.go:96] found id: ""
	I1227 20:13:34.712928  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:34.712995  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:34.716706  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:34.720270  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:34.720346  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:34.746430  319301 cri.go:96] found id: ""
	I1227 20:13:34.746456  319301 logs.go:282] 0 containers: []
	W1227 20:13:34.746465  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:34.746472  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:34.746530  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:34.773423  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:34.773481  319301 cri.go:96] found id: ""
	I1227 20:13:34.773490  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:34.773560  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:34.777238  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:34.777325  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:34.804429  319301 cri.go:96] found id: ""
	I1227 20:13:34.804455  319301 logs.go:282] 0 containers: []
	W1227 20:13:34.804464  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:34.804471  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:34.804528  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:34.837390  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:34.837412  319301 cri.go:96] found id: ""
	I1227 20:13:34.837421  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:34.837518  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:34.841292  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:34.841362  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:34.882512  319301 cri.go:96] found id: ""
	I1227 20:13:34.882537  319301 logs.go:282] 0 containers: []
	W1227 20:13:34.882547  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:34.882561  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:34.882593  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:34.935722  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:34.935778  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:34.963786  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:34.963815  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:35.068786  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:35.068824  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:35.118359  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:35.118402  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:35.146117  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:35.146144  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:35.223101  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:35.223145  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:35.255059  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:35.255089  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:35.276475  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:35.276510  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:35.351174  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:35.342460    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.343305    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.344856    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.345617    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.347573    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:35.342460    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.343305    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.344856    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.345617    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:35.347573    9682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:35.351239  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:35.351268  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:37.881796  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:37.894482  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:37.894556  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:37.924732  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:37.924756  319301 cri.go:96] found id: ""
	I1227 20:13:37.924765  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:37.924821  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:37.928636  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:37.928711  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:37.956752  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:37.956775  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:37.956781  319301 cri.go:96] found id: ""
	I1227 20:13:37.956801  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:37.956860  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:37.960536  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:37.964778  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:37.964879  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:37.998167  319301 cri.go:96] found id: ""
	I1227 20:13:37.998192  319301 logs.go:282] 0 containers: []
	W1227 20:13:37.998202  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:37.998208  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:37.998268  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:38.027828  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:38.027903  319301 cri.go:96] found id: ""
	I1227 20:13:38.027928  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:38.028019  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:38.032285  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:38.032374  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:38.063193  319301 cri.go:96] found id: ""
	I1227 20:13:38.063219  319301 logs.go:282] 0 containers: []
	W1227 20:13:38.063238  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:38.063277  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:38.063338  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:38.100160  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:38.100184  319301 cri.go:96] found id: ""
	I1227 20:13:38.100192  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:38.100248  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:38.104272  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:38.104360  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:38.132286  319301 cri.go:96] found id: ""
	I1227 20:13:38.132319  319301 logs.go:282] 0 containers: []
	W1227 20:13:38.132329  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:38.132343  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:38.132355  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:38.163697  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:38.163723  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:38.181632  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:38.181662  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:38.210225  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:38.210258  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:38.255805  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:38.255842  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:38.358465  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:38.358500  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:38.425713  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:38.417673    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.418194    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.420263    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.420756    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.422182    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:38.417673    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.418194    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.420263    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.420756    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:38.422182    9792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:38.425743  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:38.425766  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:38.481423  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:38.481466  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:38.506752  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:38.506783  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:38.536076  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:38.536104  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:41.112032  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:41.122203  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:41.122272  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:41.147769  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:41.147833  319301 cri.go:96] found id: ""
	I1227 20:13:41.147858  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:41.147945  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:41.151581  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:41.151651  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:41.176060  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:41.176078  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:41.176082  319301 cri.go:96] found id: ""
	I1227 20:13:41.176090  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:41.176144  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:41.179877  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:41.183247  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:41.183311  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:41.212692  319301 cri.go:96] found id: ""
	I1227 20:13:41.212717  319301 logs.go:282] 0 containers: []
	W1227 20:13:41.212727  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:41.212733  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:41.212814  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:41.237313  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:41.237335  319301 cri.go:96] found id: ""
	I1227 20:13:41.237343  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:41.237429  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:41.241432  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:41.241552  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:41.274168  319301 cri.go:96] found id: ""
	I1227 20:13:41.274196  319301 logs.go:282] 0 containers: []
	W1227 20:13:41.274206  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:41.274212  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:41.274295  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:41.300597  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:41.300620  319301 cri.go:96] found id: ""
	I1227 20:13:41.300628  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:41.300702  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:41.304360  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:41.304466  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:41.330795  319301 cri.go:96] found id: ""
	I1227 20:13:41.330819  319301 logs.go:282] 0 containers: []
	W1227 20:13:41.330828  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:41.330860  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:41.330885  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:41.358931  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:41.358960  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:41.383514  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:41.383539  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:41.469734  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:41.469771  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:41.573372  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:41.573411  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:41.591886  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:41.591916  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:41.674483  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:41.665884    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.666635    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.667427    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.669130    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.669864    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:41.665884    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.666635    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.667427    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.669130    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:41.669864    9910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:41.674507  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:41.674521  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:41.756704  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:41.756741  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:41.803676  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:41.803709  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:41.838752  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:41.838785  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:44.371993  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:44.382732  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:44.382811  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:44.408302  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:44.408324  319301 cri.go:96] found id: ""
	I1227 20:13:44.408332  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:44.408387  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:44.411908  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:44.411977  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:44.438505  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:44.438537  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:44.438543  319301 cri.go:96] found id: ""
	I1227 20:13:44.438551  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:44.438612  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:44.443020  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:44.446843  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:44.446907  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:44.473249  319301 cri.go:96] found id: ""
	I1227 20:13:44.473273  319301 logs.go:282] 0 containers: []
	W1227 20:13:44.473282  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:44.473288  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:44.473344  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:44.506635  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:44.506657  319301 cri.go:96] found id: ""
	I1227 20:13:44.506665  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:44.506719  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:44.510255  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:44.510327  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:44.535681  319301 cri.go:96] found id: ""
	I1227 20:13:44.535706  319301 logs.go:282] 0 containers: []
	W1227 20:13:44.535715  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:44.535722  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:44.535779  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:44.566431  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:44.566454  319301 cri.go:96] found id: ""
	I1227 20:13:44.566463  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:44.566544  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:44.570308  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:44.570429  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:44.596900  319301 cri.go:96] found id: ""
	I1227 20:13:44.596925  319301 logs.go:282] 0 containers: []
	W1227 20:13:44.596935  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:44.596969  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:44.596988  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:44.641306  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:44.641338  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:44.670860  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:44.670887  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:44.698228  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:44.698303  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:44.781609  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:44.781645  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:44.832828  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:44.832857  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:44.851403  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:44.851434  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:44.883766  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:44.883796  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:44.982715  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:44.982754  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:45.102278  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:45.090748   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.091715   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.092803   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.093981   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.094942   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:45.090748   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.091715   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.092803   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.093981   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:45.094942   10066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:45.102308  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:45.102333  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:47.711741  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:47.722289  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:47.722355  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:47.752456  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:47.752475  319301 cri.go:96] found id: ""
	I1227 20:13:47.752483  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:47.752545  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:47.756223  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:47.756290  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:47.781994  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:47.782016  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:47.782021  319301 cri.go:96] found id: ""
	I1227 20:13:47.782029  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:47.782082  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:47.785803  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:47.789134  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:47.789202  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:47.819133  319301 cri.go:96] found id: ""
	I1227 20:13:47.819166  319301 logs.go:282] 0 containers: []
	W1227 20:13:47.819176  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:47.819188  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:47.819261  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:47.848513  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:47.848534  319301 cri.go:96] found id: ""
	I1227 20:13:47.848542  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:47.848602  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:47.852477  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:47.852545  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:47.879163  319301 cri.go:96] found id: ""
	I1227 20:13:47.879188  319301 logs.go:282] 0 containers: []
	W1227 20:13:47.879198  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:47.879204  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:47.879288  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:47.906400  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:47.906422  319301 cri.go:96] found id: ""
	I1227 20:13:47.906430  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:47.906487  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:47.910061  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:47.910142  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:47.936751  319301 cri.go:96] found id: ""
	I1227 20:13:47.936822  319301 logs.go:282] 0 containers: []
	W1227 20:13:47.936855  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:47.936885  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:47.936928  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:48.041904  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:48.041941  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:48.059753  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:48.059783  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:48.091794  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:48.091825  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:48.119314  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:48.119341  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:48.167631  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:48.167656  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:48.236954  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:48.226933   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.228070   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.229057   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.230849   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.231433   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:48.226933   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.228070   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.229057   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.230849   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:48.231433   10177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:48.236978  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:48.236992  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:48.266604  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:48.266634  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:48.326691  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:48.326727  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:48.370030  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:48.370062  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:50.950604  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:50.960973  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:50.961044  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:50.989711  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:50.989734  319301 cri.go:96] found id: ""
	I1227 20:13:50.989743  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:50.989813  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:50.993765  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:50.993882  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:51.024930  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:51.024955  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:51.024976  319301 cri.go:96] found id: ""
	I1227 20:13:51.025000  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:51.025060  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:51.029133  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:51.034041  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:51.034136  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:51.061567  319301 cri.go:96] found id: ""
	I1227 20:13:51.061590  319301 logs.go:282] 0 containers: []
	W1227 20:13:51.061599  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:51.061608  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:51.061673  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:51.090737  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:51.090764  319301 cri.go:96] found id: ""
	I1227 20:13:51.090773  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:51.090847  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:51.095345  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:51.095432  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:51.123208  319301 cri.go:96] found id: ""
	I1227 20:13:51.123244  319301 logs.go:282] 0 containers: []
	W1227 20:13:51.123254  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:51.123260  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:51.123334  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:51.154295  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:51.154317  319301 cri.go:96] found id: ""
	I1227 20:13:51.154325  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:51.154407  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:51.158410  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:51.158485  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:51.189846  319301 cri.go:96] found id: ""
	I1227 20:13:51.189882  319301 logs.go:282] 0 containers: []
	W1227 20:13:51.189896  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:51.189909  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:51.189921  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:51.286819  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:51.286858  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:51.305366  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:51.305393  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:51.380305  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:51.380343  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:51.441677  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:51.441710  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:51.481914  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:51.481949  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:51.547090  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:51.539048   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.539678   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.541335   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.541928   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.543466   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:51.539048   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.539678   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.541335   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.541928   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:51.543466   10302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:51.547154  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:51.547176  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:51.578696  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:51.578725  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:51.608004  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:51.608032  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:51.636360  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:51.636391  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:54.212415  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:54.222852  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:54.222923  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:54.251561  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:54.251580  319301 cri.go:96] found id: ""
	I1227 20:13:54.251587  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:54.251645  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:54.255279  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:54.255354  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:54.292682  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:54.292706  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:54.292711  319301 cri.go:96] found id: ""
	I1227 20:13:54.292719  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:54.292781  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:54.296595  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:54.300085  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:54.300159  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:54.326489  319301 cri.go:96] found id: ""
	I1227 20:13:54.326555  319301 logs.go:282] 0 containers: []
	W1227 20:13:54.326579  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:54.326605  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:54.326696  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:54.353313  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:54.353338  319301 cri.go:96] found id: ""
	I1227 20:13:54.353347  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:54.353403  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:54.356927  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:54.356999  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:54.381581  319301 cri.go:96] found id: ""
	I1227 20:13:54.381617  319301 logs.go:282] 0 containers: []
	W1227 20:13:54.381626  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:54.381633  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:54.381691  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:54.414363  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:54.414383  319301 cri.go:96] found id: ""
	I1227 20:13:54.414391  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:54.414446  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:54.418045  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:54.418114  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:54.449206  319301 cri.go:96] found id: ""
	I1227 20:13:54.449229  319301 logs.go:282] 0 containers: []
	W1227 20:13:54.449238  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:54.449252  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:54.449264  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:54.517227  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:54.508584   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.509203   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.510795   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.511388   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.512826   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:54.508584   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.509203   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.510795   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.511388   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:54.512826   10395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:54.517253  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:54.517266  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:54.544360  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:54.544391  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:54.599513  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:54.599547  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:13:54.644818  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:54.644847  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:54.688568  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:54.688609  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:54.713724  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:54.713751  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:54.741842  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:54.741868  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:54.820175  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:54.820209  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:54.925045  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:54.925099  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:57.443738  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:13:57.454148  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:13:57.454219  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:13:57.484004  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:57.484071  319301 cri.go:96] found id: ""
	I1227 20:13:57.484087  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:13:57.484154  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:57.487937  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:13:57.488009  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:13:57.513954  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:57.513978  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:57.513983  319301 cri.go:96] found id: ""
	I1227 20:13:57.513991  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:13:57.514048  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:57.517734  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:57.521248  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:13:57.521322  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:13:57.548709  319301 cri.go:96] found id: ""
	I1227 20:13:57.548734  319301 logs.go:282] 0 containers: []
	W1227 20:13:57.548743  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:13:57.548749  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:13:57.548807  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:13:57.574830  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:57.574853  319301 cri.go:96] found id: ""
	I1227 20:13:57.574862  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:13:57.574919  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:57.578643  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:13:57.578770  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:13:57.604928  319301 cri.go:96] found id: ""
	I1227 20:13:57.604952  319301 logs.go:282] 0 containers: []
	W1227 20:13:57.604961  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:13:57.604967  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:13:57.605037  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:13:57.636096  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:57.636118  319301 cri.go:96] found id: ""
	I1227 20:13:57.636126  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:13:57.636181  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:13:57.640206  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:13:57.640289  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:13:57.664867  319301 cri.go:96] found id: ""
	I1227 20:13:57.664893  319301 logs.go:282] 0 containers: []
	W1227 20:13:57.664903  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:13:57.664918  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:13:57.664930  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:13:57.760571  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:13:57.760614  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:13:57.779034  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:13:57.779063  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:13:57.860979  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:13:57.853801   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.854291   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.855825   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.856219   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.857717   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:13:57.853801   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.854291   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.855825   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.856219   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:13:57.857717   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:13:57.861005  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:13:57.861030  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:13:57.891248  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:13:57.891279  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:13:57.951146  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:13:57.951184  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:13:57.983957  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:13:57.983983  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:13:58.027711  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:13:58.027751  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:13:58.057942  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:13:58.057967  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:13:58.134700  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:13:58.134737  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:00.665876  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:00.676353  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:00.676426  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:00.704251  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:00.704274  319301 cri.go:96] found id: ""
	I1227 20:14:00.704284  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:00.704369  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:00.708101  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:00.708172  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:00.744575  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:00.744598  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:00.744602  319301 cri.go:96] found id: ""
	I1227 20:14:00.744610  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:00.744681  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:00.748672  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:00.752393  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:00.752495  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:00.778438  319301 cri.go:96] found id: ""
	I1227 20:14:00.778463  319301 logs.go:282] 0 containers: []
	W1227 20:14:00.778472  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:00.778478  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:00.778568  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:00.804119  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:00.804143  319301 cri.go:96] found id: ""
	I1227 20:14:00.804152  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:00.804243  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:00.807914  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:00.808018  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:00.837548  319301 cri.go:96] found id: ""
	I1227 20:14:00.837626  319301 logs.go:282] 0 containers: []
	W1227 20:14:00.837640  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:00.837648  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:00.837723  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:00.864504  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:00.864527  319301 cri.go:96] found id: ""
	I1227 20:14:00.864535  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:00.864590  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:00.868408  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:00.868482  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:00.897150  319301 cri.go:96] found id: ""
	I1227 20:14:00.897173  319301 logs.go:282] 0 containers: []
	W1227 20:14:00.897182  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:00.897197  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:00.897210  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:00.998644  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:00.998688  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:01.021375  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:01.021415  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:01.054456  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:01.054487  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:01.115661  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:01.115700  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:01.161388  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:01.161423  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:01.192518  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:01.192549  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:01.275490  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:01.275523  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:01.341916  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:01.334014   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.334408   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.335994   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.336428   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.337960   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:01.334014   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.334408   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.335994   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.336428   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:01.337960   10692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:01.341937  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:01.341950  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:01.368174  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:01.368205  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:03.909559  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:03.920151  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:03.920223  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:03.950304  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:03.950321  319301 cri.go:96] found id: ""
	I1227 20:14:03.950329  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:03.950383  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:03.954284  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:03.954356  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:03.991836  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:03.991917  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:03.991937  319301 cri.go:96] found id: ""
	I1227 20:14:03.991960  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:03.992044  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:03.996532  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:04.000198  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:04.000315  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:04.031549  319301 cri.go:96] found id: ""
	I1227 20:14:04.031622  319301 logs.go:282] 0 containers: []
	W1227 20:14:04.031647  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:04.031671  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:04.031765  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:04.060260  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:04.060328  319301 cri.go:96] found id: ""
	I1227 20:14:04.060356  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:04.060444  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:04.064496  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:04.064588  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:04.102911  319301 cri.go:96] found id: ""
	I1227 20:14:04.103013  319301 logs.go:282] 0 containers: []
	W1227 20:14:04.103124  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:04.103169  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:04.103319  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:04.131147  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:04.131212  319301 cri.go:96] found id: ""
	I1227 20:14:04.131234  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:04.131327  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:04.135698  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:04.135819  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:04.164124  319301 cri.go:96] found id: ""
	I1227 20:14:04.164202  319301 logs.go:282] 0 containers: []
	W1227 20:14:04.164224  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:04.164266  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:04.164297  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:04.182491  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:04.182521  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:04.211036  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:04.211068  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:04.256784  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:04.256821  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:04.348299  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:04.348336  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:04.450573  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:04.450613  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:04.516283  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:04.506999   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.507835   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.510141   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.510856   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.512527   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:04.506999   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.507835   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.510141   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.510856   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:04.512527   10804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:04.516305  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:04.516319  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:04.576841  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:04.576872  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:04.614008  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:04.614035  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:04.641690  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:04.641719  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:07.176073  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:07.186712  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:07.186783  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:07.211686  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:07.211709  319301 cri.go:96] found id: ""
	I1227 20:14:07.211718  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:07.211775  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:07.215681  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:07.215756  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:07.240540  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:07.240563  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:07.240569  319301 cri.go:96] found id: ""
	I1227 20:14:07.240577  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:07.240630  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:07.245279  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:07.249179  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:07.249250  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:07.276774  319301 cri.go:96] found id: ""
	I1227 20:14:07.276800  319301 logs.go:282] 0 containers: []
	W1227 20:14:07.276810  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:07.276816  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:07.276873  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:07.304802  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:07.304821  319301 cri.go:96] found id: ""
	I1227 20:14:07.304829  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:07.304883  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:07.308534  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:07.308604  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:07.336318  319301 cri.go:96] found id: ""
	I1227 20:14:07.336344  319301 logs.go:282] 0 containers: []
	W1227 20:14:07.336354  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:07.336360  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:07.336423  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:07.362751  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:07.362771  319301 cri.go:96] found id: ""
	I1227 20:14:07.362780  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:07.362840  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:07.366846  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:07.366918  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:07.395130  319301 cri.go:96] found id: ""
	I1227 20:14:07.395152  319301 logs.go:282] 0 containers: []
	W1227 20:14:07.395161  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:07.395175  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:07.395187  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:07.491440  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:07.491518  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:07.527740  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:07.527770  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:07.558436  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:07.558464  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:07.588229  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:07.588259  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:07.607165  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:07.607197  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:07.677755  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:07.668928   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.669821   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.671526   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.672177   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.673864   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:07.668928   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.669821   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.671526   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.672177   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:07.673864   10940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:07.677777  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:07.677791  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:07.739114  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:07.739152  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:07.784369  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:07.784406  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:07.810544  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:07.810571  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:10.388063  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:10.398699  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:10.398769  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:10.429540  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:10.429607  319301 cri.go:96] found id: ""
	I1227 20:14:10.429631  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:10.429721  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:10.433534  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:10.433651  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:10.459275  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:10.459297  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:10.459303  319301 cri.go:96] found id: ""
	I1227 20:14:10.459310  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:10.459366  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:10.463124  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:10.466705  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:10.466798  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:10.492126  319301 cri.go:96] found id: ""
	I1227 20:14:10.492155  319301 logs.go:282] 0 containers: []
	W1227 20:14:10.492173  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:10.492184  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:10.492242  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:10.518226  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:10.518248  319301 cri.go:96] found id: ""
	I1227 20:14:10.518256  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:10.518364  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:10.522989  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:10.523096  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:10.549695  319301 cri.go:96] found id: ""
	I1227 20:14:10.549722  319301 logs.go:282] 0 containers: []
	W1227 20:14:10.549732  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:10.549738  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:10.549798  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:10.579366  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:10.579390  319301 cri.go:96] found id: ""
	I1227 20:14:10.579398  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:10.579455  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:10.583638  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:10.583714  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:10.615082  319301 cri.go:96] found id: ""
	I1227 20:14:10.615105  319301 logs.go:282] 0 containers: []
	W1227 20:14:10.615113  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:10.615130  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:10.615142  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:10.683394  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:10.674472   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.675801   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.676387   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.678136   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.678634   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:10.674472   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.675801   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.676387   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.678136   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:10.678634   11038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:10.683412  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:10.683425  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:10.727898  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:10.727931  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:10.753009  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:10.753042  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:10.782677  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:10.782703  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:10.866110  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:10.866147  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:10.959413  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:10.959452  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:10.977909  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:10.977941  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:11.005943  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:11.005969  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:11.074309  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:11.074346  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:13.614417  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:13.625578  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:13.625646  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:13.652507  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:13.652525  319301 cri.go:96] found id: ""
	I1227 20:14:13.652534  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:13.652588  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:13.656545  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:13.656609  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:13.683073  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:13.683097  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:13.683102  319301 cri.go:96] found id: ""
	I1227 20:14:13.683110  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:13.683166  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:13.686968  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:13.690405  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:13.690466  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:13.717840  319301 cri.go:96] found id: ""
	I1227 20:14:13.717864  319301 logs.go:282] 0 containers: []
	W1227 20:14:13.717873  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:13.717879  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:13.717938  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:13.746028  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:13.746049  319301 cri.go:96] found id: ""
	I1227 20:14:13.746058  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:13.746117  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:13.749660  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:13.749741  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:13.775234  319301 cri.go:96] found id: ""
	I1227 20:14:13.775301  319301 logs.go:282] 0 containers: []
	W1227 20:14:13.775322  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:13.775330  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:13.775388  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:13.800618  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:13.800642  319301 cri.go:96] found id: ""
	I1227 20:14:13.800650  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:13.800708  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:13.804545  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:13.804619  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:13.832761  319301 cri.go:96] found id: ""
	I1227 20:14:13.832786  319301 logs.go:282] 0 containers: []
	W1227 20:14:13.832795  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:13.832811  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:13.832824  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:13.851133  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:13.851163  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:13.926603  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:13.926681  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:13.961517  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:13.961544  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:14.069694  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:14.069739  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:14.151483  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:14.142577   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.143391   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.145037   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.145551   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.147508   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:14.142577   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.143391   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.145037   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.145551   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:14.147508   11187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:14.151505  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:14.151520  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:14.181727  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:14.181758  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:14.240301  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:14.240339  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:14.300709  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:14.300743  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:14.336466  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:14.336498  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:16.865634  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:16.876358  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:16.876432  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:16.904188  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:16.904253  319301 cri.go:96] found id: ""
	I1227 20:14:16.904276  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:16.904367  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:16.908220  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:16.908322  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:16.937896  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:16.937919  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:16.937924  319301 cri.go:96] found id: ""
	I1227 20:14:16.937932  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:16.937986  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:16.942670  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:16.946301  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:16.946387  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:16.985586  319301 cri.go:96] found id: ""
	I1227 20:14:16.985609  319301 logs.go:282] 0 containers: []
	W1227 20:14:16.985618  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:16.985624  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:16.985683  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:17.013996  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:17.014029  319301 cri.go:96] found id: ""
	I1227 20:14:17.014039  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:17.014137  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:17.018935  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:17.019008  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:17.052484  319301 cri.go:96] found id: ""
	I1227 20:14:17.052561  319301 logs.go:282] 0 containers: []
	W1227 20:14:17.052583  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:17.052604  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:17.052695  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:17.081622  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:17.081695  319301 cri.go:96] found id: ""
	I1227 20:14:17.081718  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:17.081788  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:17.085690  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:17.085794  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:17.112049  319301 cri.go:96] found id: ""
	I1227 20:14:17.112074  319301 logs.go:282] 0 containers: []
	W1227 20:14:17.112082  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:17.112098  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:17.112141  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:17.137714  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:17.137743  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:17.213490  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:17.213533  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:17.246326  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:17.246356  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:17.328320  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:17.320845   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.321569   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.322897   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.323352   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.324795   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:17.320845   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.321569   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.322897   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.323352   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:17.324795   11316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:17.328340  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:17.328353  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:17.385541  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:17.385578  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:17.427419  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:17.427449  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:17.452174  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:17.452206  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:17.546685  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:17.546724  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:17.565295  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:17.565332  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:20.098978  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:20.111051  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:20.111126  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:20.137851  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:20.137927  319301 cri.go:96] found id: ""
	I1227 20:14:20.137963  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:20.138055  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:20.142900  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:20.143001  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:20.170010  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:20.170087  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:20.170109  319301 cri.go:96] found id: ""
	I1227 20:14:20.170137  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:20.170221  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:20.175063  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:20.178747  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:20.178824  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:20.206381  319301 cri.go:96] found id: ""
	I1227 20:14:20.206409  319301 logs.go:282] 0 containers: []
	W1227 20:14:20.206418  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:20.206425  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:20.206485  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:20.233473  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:20.233499  319301 cri.go:96] found id: ""
	I1227 20:14:20.233508  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:20.233571  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:20.237997  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:20.238070  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:20.262995  319301 cri.go:96] found id: ""
	I1227 20:14:20.263067  319301 logs.go:282] 0 containers: []
	W1227 20:14:20.263092  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:20.263099  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:20.263170  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:20.288462  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:20.288537  319301 cri.go:96] found id: ""
	I1227 20:14:20.288566  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:20.288647  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:20.292436  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:20.292550  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:20.322573  319301 cri.go:96] found id: ""
	I1227 20:14:20.322596  319301 logs.go:282] 0 containers: []
	W1227 20:14:20.322605  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:20.322621  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:20.322633  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:20.432211  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:20.432245  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:20.496754  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:20.496791  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:20.540278  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:20.540351  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:20.567122  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:20.567152  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:20.648855  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:20.648895  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:20.667153  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:20.667185  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:20.736076  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:20.727815   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.728362   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.730119   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.730829   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.732497   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:20.727815   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.728362   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.730119   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.730829   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:20.732497   11461 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:20.736098  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:20.736112  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:20.762277  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:20.762304  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:20.800871  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:20.800901  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:23.331772  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:23.342153  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:23.342227  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:23.367402  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:23.367424  319301 cri.go:96] found id: ""
	I1227 20:14:23.367433  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:23.367489  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:23.371067  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:23.371137  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:23.397005  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:23.397081  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:23.397101  319301 cri.go:96] found id: ""
	I1227 20:14:23.397127  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:23.397212  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:23.401002  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:23.404386  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:23.404490  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:23.430285  319301 cri.go:96] found id: ""
	I1227 20:14:23.430309  319301 logs.go:282] 0 containers: []
	W1227 20:14:23.430318  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:23.430326  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:23.430383  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:23.461494  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:23.461517  319301 cri.go:96] found id: ""
	I1227 20:14:23.461526  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:23.461578  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:23.465337  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:23.465409  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:23.496783  319301 cri.go:96] found id: ""
	I1227 20:14:23.496808  319301 logs.go:282] 0 containers: []
	W1227 20:14:23.496818  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:23.496824  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:23.496881  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:23.522580  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:23.522602  319301 cri.go:96] found id: ""
	I1227 20:14:23.522610  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:23.522665  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:23.526436  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:23.526519  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:23.557267  319301 cri.go:96] found id: ""
	I1227 20:14:23.557299  319301 logs.go:282] 0 containers: []
	W1227 20:14:23.557309  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:23.557325  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:23.557336  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:23.584981  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:23.585010  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:23.648213  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:23.648252  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:23.695771  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:23.695847  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:23.726135  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:23.726165  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:23.810400  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:23.810440  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:23.916410  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:23.916451  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:23.945753  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:23.945825  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:23.996874  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:23.996903  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:24.015806  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:24.015853  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:24.093634  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:24.083702   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.084655   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.086499   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.086863   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.088426   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:24.083702   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.084655   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.086499   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.086863   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:24.088426   11619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:26.595192  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:26.607312  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:26.607388  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:26.644526  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:26.644546  319301 cri.go:96] found id: ""
	I1227 20:14:26.644554  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:26.644613  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:26.648515  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:26.648588  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:26.674360  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:26.674383  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:26.674387  319301 cri.go:96] found id: ""
	I1227 20:14:26.674395  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:26.674451  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:26.678114  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:26.681548  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:26.681619  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:26.707823  319301 cri.go:96] found id: ""
	I1227 20:14:26.707847  319301 logs.go:282] 0 containers: []
	W1227 20:14:26.707856  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:26.707863  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:26.707918  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:26.736808  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:26.736830  319301 cri.go:96] found id: ""
	I1227 20:14:26.736839  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:26.736910  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:26.740449  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:26.740516  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:26.767979  319301 cri.go:96] found id: ""
	I1227 20:14:26.768005  319301 logs.go:282] 0 containers: []
	W1227 20:14:26.768014  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:26.768020  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:26.768093  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:26.794399  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:26.794419  319301 cri.go:96] found id: ""
	I1227 20:14:26.794428  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:26.794482  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:26.798158  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:26.798242  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:26.822859  319301 cri.go:96] found id: ""
	I1227 20:14:26.822883  319301 logs.go:282] 0 containers: []
	W1227 20:14:26.822893  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:26.822924  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:26.822946  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:26.868214  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:26.868238  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:26.932994  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:26.933029  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:26.977303  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:26.977340  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:27.068000  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:27.068040  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:27.171536  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:27.171574  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:27.190535  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:27.190562  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:27.216736  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:27.216762  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:27.243411  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:27.243439  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:27.295099  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:27.295126  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:27.357878  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:27.350559   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.350955   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.352482   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.352824   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.354320   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:27.350559   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.350955   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.352482   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.352824   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:27.354320   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:29.858681  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:29.868776  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:29.868844  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:29.896575  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:29.896597  319301 cri.go:96] found id: ""
	I1227 20:14:29.896605  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:29.896686  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:29.900141  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:29.900230  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:29.933885  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:29.933909  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:29.933915  319301 cri.go:96] found id: ""
	I1227 20:14:29.933922  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:29.933995  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:29.937419  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:29.940597  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:29.940661  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:29.985795  319301 cri.go:96] found id: ""
	I1227 20:14:29.985826  319301 logs.go:282] 0 containers: []
	W1227 20:14:29.985836  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:29.985843  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:29.985919  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:30.025679  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:30.025700  319301 cri.go:96] found id: ""
	I1227 20:14:30.025709  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:30.025777  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:30.049697  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:30.049787  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:30.082890  319301 cri.go:96] found id: ""
	I1227 20:14:30.082916  319301 logs.go:282] 0 containers: []
	W1227 20:14:30.082926  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:30.082934  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:30.083006  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:30.119124  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:30.119148  319301 cri.go:96] found id: ""
	I1227 20:14:30.119156  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:30.119217  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:30.123169  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:30.123244  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:30.151766  319301 cri.go:96] found id: ""
	I1227 20:14:30.151790  319301 logs.go:282] 0 containers: []
	W1227 20:14:30.151799  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:30.151816  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:30.151828  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:30.169326  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:30.169357  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:30.199380  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:30.199412  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:30.265121  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:30.265163  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:30.356459  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:30.356498  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:30.392984  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:30.393013  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:30.499474  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:30.499511  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:30.571342  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:30.561186   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.563435   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.564195   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.566014   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.566655   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:30.561186   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.563435   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.564195   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.566014   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:30.566655   11850 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:30.571365  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:30.571378  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:30.615172  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:30.615207  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:30.644774  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:30.644803  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:33.172504  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:33.183855  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:33.183927  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:33.214210  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:33.214232  319301 cri.go:96] found id: ""
	I1227 20:14:33.214241  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:33.214307  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:33.218161  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:33.218245  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:33.244477  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:33.244501  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:33.244506  319301 cri.go:96] found id: ""
	I1227 20:14:33.244513  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:33.244574  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:33.248725  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:33.252096  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:33.252166  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:33.284273  319301 cri.go:96] found id: ""
	I1227 20:14:33.284304  319301 logs.go:282] 0 containers: []
	W1227 20:14:33.284317  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:33.284327  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:33.284406  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:33.311094  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:33.311117  319301 cri.go:96] found id: ""
	I1227 20:14:33.311125  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:33.311184  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:33.315375  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:33.315450  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:33.344846  319301 cri.go:96] found id: ""
	I1227 20:14:33.344870  319301 logs.go:282] 0 containers: []
	W1227 20:14:33.344879  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:33.344886  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:33.344945  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:33.370949  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:33.371011  319301 cri.go:96] found id: ""
	I1227 20:14:33.371033  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:33.371093  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:33.375136  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:33.375211  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:33.403339  319301 cri.go:96] found id: ""
	I1227 20:14:33.403361  319301 logs.go:282] 0 containers: []
	W1227 20:14:33.403370  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:33.403385  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:33.403396  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:33.484170  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:33.484207  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:33.516735  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:33.516766  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:33.534421  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:33.534452  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:33.613759  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:33.613800  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:33.651422  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:33.651450  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:33.759905  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:33.759949  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:33.827184  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:33.819142   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.819867   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.821423   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.822059   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.823552   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:33.819142   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.819867   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.821423   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.822059   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:33.823552   11981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:33.827217  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:33.827232  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:33.858891  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:33.858926  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:33.904092  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:33.904128  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:36.431294  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:36.449106  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:36.449178  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:36.480392  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:36.480416  319301 cri.go:96] found id: ""
	I1227 20:14:36.480425  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:36.480481  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:36.485341  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:36.485424  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:36.515111  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:36.515185  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:36.515199  319301 cri.go:96] found id: ""
	I1227 20:14:36.515225  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:36.515283  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:36.519737  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:36.523801  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:36.523877  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:36.550603  319301 cri.go:96] found id: ""
	I1227 20:14:36.550628  319301 logs.go:282] 0 containers: []
	W1227 20:14:36.550637  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:36.550644  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:36.550699  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:36.586466  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:36.586492  319301 cri.go:96] found id: ""
	I1227 20:14:36.586500  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:36.586577  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:36.590067  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:36.590139  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:36.621202  319301 cri.go:96] found id: ""
	I1227 20:14:36.621235  319301 logs.go:282] 0 containers: []
	W1227 20:14:36.621244  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:36.621250  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:36.621308  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:36.647269  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:36.647292  319301 cri.go:96] found id: ""
	I1227 20:14:36.647301  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:36.647379  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:36.651085  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:36.651160  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:36.677749  319301 cri.go:96] found id: ""
	I1227 20:14:36.677778  319301 logs.go:282] 0 containers: []
	W1227 20:14:36.677788  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:36.677804  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:36.677817  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:36.725080  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:36.725110  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:36.755181  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:36.755211  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:36.784468  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:36.784496  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:36.816908  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:36.816940  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:36.834015  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:36.834047  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:36.900869  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:36.892648   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.893851   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.894994   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.895421   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.896907   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:36.892648   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.893851   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.894994   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.895421   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:36.896907   12113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:36.900892  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:36.900908  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:36.960391  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:36.960427  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:37.045275  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:37.045325  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:37.148150  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:37.148188  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:39.676095  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:39.686901  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:39.686981  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:39.713632  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:39.713662  319301 cri.go:96] found id: ""
	I1227 20:14:39.713681  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:39.713758  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:39.717685  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:39.717762  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:39.744240  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:39.744264  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:39.744269  319301 cri.go:96] found id: ""
	I1227 20:14:39.744277  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:39.744330  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:39.748168  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:39.751671  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:39.751770  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:39.781268  319301 cri.go:96] found id: ""
	I1227 20:14:39.781293  319301 logs.go:282] 0 containers: []
	W1227 20:14:39.781302  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:39.781309  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:39.781401  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:39.810785  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:39.810807  319301 cri.go:96] found id: ""
	I1227 20:14:39.810815  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:39.810888  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:39.814715  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:39.814784  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:39.841437  319301 cri.go:96] found id: ""
	I1227 20:14:39.841493  319301 logs.go:282] 0 containers: []
	W1227 20:14:39.841503  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:39.841508  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:39.841573  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:39.868907  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:39.868925  319301 cri.go:96] found id: ""
	I1227 20:14:39.868933  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:39.868987  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:39.872674  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:39.872744  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:39.900867  319301 cri.go:96] found id: ""
	I1227 20:14:39.900943  319301 logs.go:282] 0 containers: []
	W1227 20:14:39.900966  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:39.901013  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:39.901043  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:39.918593  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:39.918625  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:39.949056  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:39.949087  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:39.981788  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:39.981818  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:40.105238  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:40.105377  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:40.191666  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:40.183905   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.184449   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.186006   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.186447   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.187950   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:40.183905   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.184449   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.186006   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.186447   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:40.187950   12226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:40.191684  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:40.191701  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:40.262140  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:40.262180  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:40.310808  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:40.310845  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:40.337783  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:40.337811  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:40.368704  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:40.368733  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:42.951291  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:42.961621  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:42.961714  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:42.996358  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:42.996382  319301 cri.go:96] found id: ""
	I1227 20:14:42.996391  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:42.996476  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:43.000167  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:43.000258  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:43.042517  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:43.042542  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:43.042547  319301 cri.go:96] found id: ""
	I1227 20:14:43.042555  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:43.042636  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:43.046498  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:43.049992  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:43.050069  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:43.076653  319301 cri.go:96] found id: ""
	I1227 20:14:43.076681  319301 logs.go:282] 0 containers: []
	W1227 20:14:43.076690  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:43.076697  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:43.076755  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:43.104355  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:43.104379  319301 cri.go:96] found id: ""
	I1227 20:14:43.104388  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:43.104444  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:43.108064  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:43.108137  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:43.136746  319301 cri.go:96] found id: ""
	I1227 20:14:43.136771  319301 logs.go:282] 0 containers: []
	W1227 20:14:43.136780  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:43.136786  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:43.136856  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:43.167333  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:43.167354  319301 cri.go:96] found id: ""
	I1227 20:14:43.167362  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:43.167417  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:43.171054  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:43.171167  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:43.196510  319301 cri.go:96] found id: ""
	I1227 20:14:43.196539  319301 logs.go:282] 0 containers: []
	W1227 20:14:43.196548  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:43.196562  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:43.196573  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:43.246188  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:43.246222  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:43.280060  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:43.280088  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:43.364679  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:43.364718  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:43.383405  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:43.383434  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:43.412457  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:43.412484  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:43.441225  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:43.441251  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:43.483277  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:43.483305  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:43.587381  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:43.587418  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:43.657966  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:43.648616   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.649341   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.651243   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.652029   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.653574   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:43.648616   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.649341   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.651243   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.652029   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:43.653574   12380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:43.657996  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:43.658011  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:46.217780  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:46.229546  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:46.229622  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:46.255054  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:46.255074  319301 cri.go:96] found id: ""
	I1227 20:14:46.255082  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:46.255135  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:46.258848  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:46.258946  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:46.292684  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:46.292758  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:46.292778  319301 cri.go:96] found id: ""
	I1227 20:14:46.292803  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:46.292889  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:46.296621  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:46.300035  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:46.300104  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:46.325669  319301 cri.go:96] found id: ""
	I1227 20:14:46.325694  319301 logs.go:282] 0 containers: []
	W1227 20:14:46.325703  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:46.325709  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:46.325766  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:46.352094  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:46.352159  319301 cri.go:96] found id: ""
	I1227 20:14:46.352182  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:46.352268  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:46.355963  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:46.356077  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:46.381620  319301 cri.go:96] found id: ""
	I1227 20:14:46.381646  319301 logs.go:282] 0 containers: []
	W1227 20:14:46.381656  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:46.381662  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:46.381738  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:46.410104  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:46.410127  319301 cri.go:96] found id: ""
	I1227 20:14:46.410135  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:46.410191  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:46.413648  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:46.413715  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:46.440709  319301 cri.go:96] found id: ""
	I1227 20:14:46.440734  319301 logs.go:282] 0 containers: []
	W1227 20:14:46.440745  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:46.440759  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:46.440781  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:46.469916  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:46.469945  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:46.571819  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:46.571854  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:46.590503  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:46.590531  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:46.624094  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:46.624120  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:46.655415  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:46.655444  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:46.727967  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:46.719794   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.720498   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.722193   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.722714   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.724244   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:46.719794   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.720498   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.722193   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.722714   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:46.724244   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:46.727989  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:46.728003  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:46.787862  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:46.787899  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:46.848761  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:46.848797  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:46.883658  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:46.883687  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:49.466063  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:49.476365  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:49.476460  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:49.502643  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:49.502665  319301 cri.go:96] found id: ""
	I1227 20:14:49.502673  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:49.502727  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:49.506369  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:49.506443  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:49.532399  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:49.532421  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:49.532427  319301 cri.go:96] found id: ""
	I1227 20:14:49.532435  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:49.532488  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:49.536133  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:49.539580  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:49.539645  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:49.566501  319301 cri.go:96] found id: ""
	I1227 20:14:49.566528  319301 logs.go:282] 0 containers: []
	W1227 20:14:49.566537  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:49.566544  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:49.566605  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:49.602221  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:49.602245  319301 cri.go:96] found id: ""
	I1227 20:14:49.602254  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:49.602316  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:49.606305  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:49.606375  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:49.632906  319301 cri.go:96] found id: ""
	I1227 20:14:49.632931  319301 logs.go:282] 0 containers: []
	W1227 20:14:49.632941  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:49.632946  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:49.633012  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:49.660593  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:49.660616  319301 cri.go:96] found id: ""
	I1227 20:14:49.660625  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:49.660683  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:49.664343  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:49.664414  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:49.691030  319301 cri.go:96] found id: ""
	I1227 20:14:49.691093  319301 logs.go:282] 0 containers: []
	W1227 20:14:49.691110  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:49.691125  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:49.691137  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:49.786516  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:49.786552  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:49.837581  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:49.837615  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:49.923089  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:49.923126  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:49.964776  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:49.964806  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:49.984138  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:49.984166  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:50.053988  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:50.045799   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.046531   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.048064   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.048564   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.050125   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:50.045799   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.046531   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.048064   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.048564   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:50.050125   12617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:50.054052  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:50.054072  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:50.080753  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:50.080847  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:50.160335  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:50.160373  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:50.189801  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:50.189831  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:52.722382  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:52.732860  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:52.732954  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:52.759105  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:52.759129  319301 cri.go:96] found id: ""
	I1227 20:14:52.759140  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:52.759192  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:52.763086  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:52.763152  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:52.789342  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:52.789365  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:52.789370  319301 cri.go:96] found id: ""
	I1227 20:14:52.789378  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:52.789441  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:52.793045  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:52.796599  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:52.796677  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:52.821951  319301 cri.go:96] found id: ""
	I1227 20:14:52.821975  319301 logs.go:282] 0 containers: []
	W1227 20:14:52.821984  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:52.821990  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:52.822048  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:52.848207  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:52.848227  319301 cri.go:96] found id: ""
	I1227 20:14:52.848235  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:52.848290  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:52.852016  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:52.852114  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:52.878718  319301 cri.go:96] found id: ""
	I1227 20:14:52.878752  319301 logs.go:282] 0 containers: []
	W1227 20:14:52.878761  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:52.878768  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:52.878826  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:52.905928  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:52.906001  319301 cri.go:96] found id: ""
	I1227 20:14:52.906023  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:52.906113  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:52.910178  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:52.910250  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:52.937172  319301 cri.go:96] found id: ""
	I1227 20:14:52.937209  319301 logs.go:282] 0 containers: []
	W1227 20:14:52.937218  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:52.937231  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:52.937249  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:52.966131  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:52.966162  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:53.003464  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:53.003490  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:53.021719  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:53.021777  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:53.091033  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:53.081906   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.083382   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.084066   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.085728   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.086021   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:53.081906   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.083382   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.084066   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.085728   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:53.086021   12743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:53.091054  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:53.091067  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:53.153878  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:53.153918  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:53.184615  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:53.184643  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:53.268968  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:53.269005  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:53.374253  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:53.374287  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:53.403008  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:53.403044  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:55.952353  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:55.962631  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:55.962719  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:55.995078  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:55.995100  319301 cri.go:96] found id: ""
	I1227 20:14:55.995108  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:55.995174  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:55.999787  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:55.999857  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:56.034785  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:56.034809  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:56.034814  319301 cri.go:96] found id: ""
	I1227 20:14:56.034821  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:56.034886  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:56.039026  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:56.043109  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:56.043239  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:56.076322  319301 cri.go:96] found id: ""
	I1227 20:14:56.076349  319301 logs.go:282] 0 containers: []
	W1227 20:14:56.076358  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:56.076365  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:56.076450  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:56.105910  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:56.105937  319301 cri.go:96] found id: ""
	I1227 20:14:56.105945  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:56.106024  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:56.109833  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:56.109951  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:56.136658  319301 cri.go:96] found id: ""
	I1227 20:14:56.136681  319301 logs.go:282] 0 containers: []
	W1227 20:14:56.136690  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:56.136696  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:56.136751  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:56.162379  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:56.162402  319301 cri.go:96] found id: ""
	I1227 20:14:56.162409  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:56.162464  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:56.165959  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:56.166030  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:56.193023  319301 cri.go:96] found id: ""
	I1227 20:14:56.193057  319301 logs.go:282] 0 containers: []
	W1227 20:14:56.193066  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:56.193097  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:56.193131  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:56.219549  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:56.219577  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:56.255190  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:56.255218  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:56.326655  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:56.326690  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:56.369967  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:56.370002  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:56.449778  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:56.449815  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:56.481804  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:56.481833  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:56.580473  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:56.580507  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:56.597748  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:56.597781  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:56.675164  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:56.667282   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.668004   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.669569   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.670031   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.671487   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:56.667282   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.668004   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.669569   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.670031   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:56.671487   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:56.675187  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:56.675210  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:59.204907  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:14:59.215384  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:14:59.215464  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:14:59.241010  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:59.241041  319301 cri.go:96] found id: ""
	I1227 20:14:59.241056  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:14:59.241157  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:59.245340  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:14:59.245433  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:14:59.282857  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:59.282880  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:59.282886  319301 cri.go:96] found id: ""
	I1227 20:14:59.282893  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:14:59.282945  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:59.286535  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:59.289810  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:14:59.289879  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:14:59.317473  319301 cri.go:96] found id: ""
	I1227 20:14:59.317509  319301 logs.go:282] 0 containers: []
	W1227 20:14:59.317517  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:14:59.317524  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:14:59.317593  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:14:59.350932  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:59.350952  319301 cri.go:96] found id: ""
	I1227 20:14:59.350961  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:14:59.351015  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:59.354698  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:14:59.354768  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:14:59.381626  319301 cri.go:96] found id: ""
	I1227 20:14:59.381660  319301 logs.go:282] 0 containers: []
	W1227 20:14:59.381669  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:14:59.381675  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:14:59.381730  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:14:59.408107  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:14:59.408130  319301 cri.go:96] found id: ""
	I1227 20:14:59.408140  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:14:59.408216  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:14:59.411771  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:14:59.411846  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:14:59.436633  319301 cri.go:96] found id: ""
	I1227 20:14:59.436660  319301 logs.go:282] 0 containers: []
	W1227 20:14:59.436669  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:14:59.436683  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:14:59.436695  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:14:59.532932  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:14:59.532968  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:14:59.601543  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:14:59.593318   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.594069   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.595883   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.596441   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.597498   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:14:59.593318   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.594069   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.595883   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.596441   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:14:59.597498   12978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:14:59.601573  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:14:59.601587  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:14:59.630627  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:14:59.630653  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:14:59.691462  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:14:59.691537  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:14:59.736271  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:14:59.736311  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:14:59.763317  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:14:59.763349  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:14:59.845478  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:14:59.845512  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:14:59.877233  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:14:59.877259  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:14:59.894077  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:14:59.894108  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:02.425928  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:15:02.437025  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:15:02.437097  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:15:02.462847  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:15:02.462876  319301 cri.go:96] found id: ""
	I1227 20:15:02.462885  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:15:02.462941  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:02.466840  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:15:02.466915  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:15:02.493867  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:15:02.493889  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:15:02.493895  319301 cri.go:96] found id: ""
	I1227 20:15:02.493903  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:15:02.493986  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:02.497849  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:02.501391  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:15:02.501500  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:15:02.531735  319301 cri.go:96] found id: ""
	I1227 20:15:02.531761  319301 logs.go:282] 0 containers: []
	W1227 20:15:02.531771  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:15:02.531779  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:15:02.531858  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:15:02.557699  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:15:02.557723  319301 cri.go:96] found id: ""
	I1227 20:15:02.557732  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:15:02.557792  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:02.561785  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:15:02.561860  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:15:02.588584  319301 cri.go:96] found id: ""
	I1227 20:15:02.588611  319301 logs.go:282] 0 containers: []
	W1227 20:15:02.588620  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:15:02.588665  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:15:02.588727  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:15:02.626246  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:02.626270  319301 cri.go:96] found id: ""
	I1227 20:15:02.626279  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:15:02.626332  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:02.630342  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:15:02.630416  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:15:02.658875  319301 cri.go:96] found id: ""
	I1227 20:15:02.658899  319301 logs.go:282] 0 containers: []
	W1227 20:15:02.658908  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:15:02.658940  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:15:02.658959  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:15:02.760567  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:15:02.760609  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:15:02.779705  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:15:02.779737  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:15:02.864780  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:15:02.844552   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.845307   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.847070   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.847814   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.850808   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:15:02.844552   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.845307   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.847070   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.847814   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:02.850808   13107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:15:02.864807  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:15:02.864822  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:15:02.930564  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:15:02.930600  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:15:02.956647  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:15:02.956674  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:02.988569  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:15:02.988644  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:15:03.080368  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:15:03.080404  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:15:03.109214  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:15:03.109254  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:15:03.154097  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:15:03.154130  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:15:05.702871  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:15:05.713737  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:15:05.713808  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:15:05.747061  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:15:05.747087  319301 cri.go:96] found id: ""
	I1227 20:15:05.747097  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:15:05.747152  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:05.751069  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:15:05.751142  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:15:05.778241  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:15:05.778264  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:15:05.778269  319301 cri.go:96] found id: ""
	I1227 20:15:05.778276  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:15:05.778330  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:05.781970  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:05.785615  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:15:05.785684  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:15:05.811372  319301 cri.go:96] found id: ""
	I1227 20:15:05.811405  319301 logs.go:282] 0 containers: []
	W1227 20:15:05.811419  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:15:05.811426  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:15:05.811487  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:15:05.837308  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:15:05.837331  319301 cri.go:96] found id: ""
	I1227 20:15:05.837339  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:15:05.837394  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:05.841435  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:15:05.841563  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:15:05.872145  319301 cri.go:96] found id: ""
	I1227 20:15:05.872175  319301 logs.go:282] 0 containers: []
	W1227 20:15:05.872184  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:15:05.872191  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:15:05.872248  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:15:05.905843  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:05.905863  319301 cri.go:96] found id: ""
	I1227 20:15:05.905872  319301 logs.go:282] 1 containers: [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:15:05.905928  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:05.909362  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:15:05.909433  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:15:05.937743  319301 cri.go:96] found id: ""
	I1227 20:15:05.937768  319301 logs.go:282] 0 containers: []
	W1227 20:15:05.937776  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:15:05.937789  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:15:05.937805  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:15:05.956337  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:15:05.956373  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:15:06.027819  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:15:06.027857  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:15:06.055387  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:15:06.055417  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:15:06.087848  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:15:06.087876  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:15:06.191189  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:15:06.191225  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:15:06.260486  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:15:06.252420   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.253150   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.254651   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.255097   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.256545   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:15:06.252420   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.253150   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.254651   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.255097   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:06.256545   13261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:15:06.260512  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:15:06.260527  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:15:06.289045  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:15:06.289074  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:15:06.340456  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:15:06.340493  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:06.367177  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:15:06.367209  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:15:08.948368  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:15:08.960093  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:15:08.960163  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:15:09.004464  319301 cri.go:96] found id: "a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:15:09.004531  319301 cri.go:96] found id: ""
	I1227 20:15:09.004541  319301 logs.go:282] 1 containers: [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722]
	I1227 20:15:09.004627  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:09.008790  319301 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 20:15:09.008905  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:15:09.041635  319301 cri.go:96] found id: "9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:15:09.041705  319301 cri.go:96] found id: "58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:15:09.041727  319301 cri.go:96] found id: ""
	I1227 20:15:09.041750  319301 logs.go:282] 2 containers: [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4]
	I1227 20:15:09.041834  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:09.046563  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:09.050558  319301 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 20:15:09.050679  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:15:09.079147  319301 cri.go:96] found id: ""
	I1227 20:15:09.079218  319301 logs.go:282] 0 containers: []
	W1227 20:15:09.079241  319301 logs.go:284] No container was found matching "coredns"
	I1227 20:15:09.079265  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:15:09.079350  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:15:09.115659  319301 cri.go:96] found id: "9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:15:09.115728  319301 cri.go:96] found id: ""
	I1227 20:15:09.115749  319301 logs.go:282] 1 containers: [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375]
	I1227 20:15:09.115833  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:09.119927  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:15:09.120060  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:15:09.148832  319301 cri.go:96] found id: ""
	I1227 20:15:09.148905  319301 logs.go:282] 0 containers: []
	W1227 20:15:09.148927  319301 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:15:09.148951  319301 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:15:09.149036  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:15:09.193967  319301 cri.go:96] found id: "d4599a49838601138827173ae16d1700bf9c506a4f9611f8f2415da1ea387070"
	I1227 20:15:09.194039  319301 cri.go:96] found id: "65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:09.194058  319301 cri.go:96] found id: ""
	I1227 20:15:09.194083  319301 logs.go:282] 2 containers: [d4599a49838601138827173ae16d1700bf9c506a4f9611f8f2415da1ea387070 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147]
	I1227 20:15:09.194168  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:09.198186  319301 ssh_runner.go:195] Run: which crictl
	I1227 20:15:09.202291  319301 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 20:15:09.202369  319301 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:15:09.233220  319301 cri.go:96] found id: ""
	I1227 20:15:09.233256  319301 logs.go:282] 0 containers: []
	W1227 20:15:09.233266  319301 logs.go:284] No container was found matching "kindnet"
	I1227 20:15:09.233275  319301 logs.go:123] Gathering logs for container status ...
	I1227 20:15:09.233286  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 20:15:09.265208  319301 logs.go:123] Gathering logs for kubelet ...
	I1227 20:15:09.265236  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:15:09.366491  319301 logs.go:123] Gathering logs for dmesg ...
	I1227 20:15:09.366527  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:15:09.385049  319301 logs.go:123] Gathering logs for kube-apiserver [a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722] ...
	I1227 20:15:09.385152  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a0c4c451f03ea12453365734709859ce6be111c36e0774709d7c83e0fe2d4722"
	I1227 20:15:09.416669  319301 logs.go:123] Gathering logs for etcd [9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd] ...
	I1227 20:15:09.416697  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9eaaa41abfbb0e38dbab8a7684df11035bf9e0b0dac1a380fd33064bbc0532bd"
	I1227 20:15:09.477821  319301 logs.go:123] Gathering logs for kube-scheduler [9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375] ...
	I1227 20:15:09.477862  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9091a550cd0b1f4cb52e876968db8eaf3fdbb005e1078ee15d4c546ee5458375"
	I1227 20:15:09.503656  319301 logs.go:123] Gathering logs for kube-controller-manager [d4599a49838601138827173ae16d1700bf9c506a4f9611f8f2415da1ea387070] ...
	I1227 20:15:09.503682  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4599a49838601138827173ae16d1700bf9c506a4f9611f8f2415da1ea387070"
	I1227 20:15:09.529517  319301 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:15:09.529549  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:15:09.594024  319301 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:15:09.583997   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.584731   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.586847   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.587585   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.589403   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:15:09.583997   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.584731   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.586847   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.587585   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:15:09.589403   13413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:15:09.594044  319301 logs.go:123] Gathering logs for etcd [58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4] ...
	I1227 20:15:09.594113  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 58f11ec87eab7e4a939e550dc33dd60d60dd53b54ea3e7070da8ecca7796cdd4"
	I1227 20:15:09.641021  319301 logs.go:123] Gathering logs for kube-controller-manager [65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147] ...
	I1227 20:15:09.641054  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 65ed2f7a44a987aa04d5228ac873d8543a8d0131ed9d0401ef21a96a331d4147"
	I1227 20:15:09.671469  319301 logs.go:123] Gathering logs for CRI-O ...
	I1227 20:15:09.671497  319301 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 20:15:12.247384  319301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:15:12.261411  319301 out.go:203] 
	W1227 20:15:12.264240  319301 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1227 20:15:12.264279  319301 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1227 20:15:12.264291  319301 out.go:285] * Related issues:
	W1227 20:15:12.264307  319301 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1227 20:15:12.264322  319301 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1227 20:15:12.272645  319301 out.go:203] 
	
	
	==> CRI-O <==
	Dec 27 20:09:47 ha-422549 crio[668]: time="2025-12-27T20:09:47.963961573Z" level=info msg="Started container" PID=1443 containerID=810850466f08e002011f0d991e32eb0109be47db69714d6e333a070593589ffc description=kube-system/kube-controller-manager-ha-422549/kube-controller-manager id=4c2fe289-ef21-4410-b80d-903288016926 name=/runtime.v1.RuntimeService/StartContainer sandboxID=38efda04ee9aef0e7908e0db5c261b87e7e5100a62c84932b9b7ba0d61a4d0b2
	Dec 27 20:09:49 ha-422549 conmon[1210]: conmon b67722550482449b8daa <ninfo>: container 1212 exited with status 1
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.376459079Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=69065085-21ea-41c3-802a-261d89524c56 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.377242719Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1df6dc90-5ba0-4b74-852c-4cf7aefb23f0 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.378198249Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=cee7eb55-89b4-4b4e-840f-5adab55395f1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.378318031Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.390342199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.390574781Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d51da34059b2d7dc5c5989964247fd01aabd5fa31dd489fcbed003c93c5d0a79/merged/etc/passwd: no such file or directory"
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.390683445Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d51da34059b2d7dc5c5989964247fd01aabd5fa31dd489fcbed003c93c5d0a79/merged/etc/group: no such file or directory"
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.391133051Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.407049484Z" level=info msg="Created container 39052e86fac88d7cd6484a6d581397a09660e8626a668440758c42943ffc493c: kube-system/storage-provisioner/storage-provisioner" id=cee7eb55-89b4-4b4e-840f-5adab55395f1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.408066239Z" level=info msg="Starting container: 39052e86fac88d7cd6484a6d581397a09660e8626a668440758c42943ffc493c" id=a1f177fc-11ea-4dd9-a25c-b20aa52a0229 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:09:49 ha-422549 crio[668]: time="2025-12-27T20:09:49.409701176Z" level=info msg="Started container" PID=1456 containerID=39052e86fac88d7cd6484a6d581397a09660e8626a668440758c42943ffc493c description=kube-system/storage-provisioner/storage-provisioner id=a1f177fc-11ea-4dd9-a25c-b20aa52a0229 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c0df0f45f11cf21c22800d785af6947dd7131cfe5dea11e9e2d6c844bc352c0a
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.443600032Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.447069767Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.447101142Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.44712181Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.451793967Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.451824431Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.451847585Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.455975682Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.456009075Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.456031754Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.458926316Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:09:59 ha-422549 crio[668]: time="2025-12-27T20:09:59.45895939Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	39052e86fac88       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Running             storage-provisioner       2                   c0df0f45f11cf       storage-provisioner                 kube-system
	810850466f08e       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   5 minutes ago       Running             kube-controller-manager   5                   38efda04ee9ae       kube-controller-manager-ha-422549   kube-system
	deb6daab23cec       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf   6 minutes ago       Running             coredns                   1                   72c204b703743       coredns-7d764666f9-n5d9d            kube-system
	43a1d9657d3c8       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf   6 minutes ago       Running             coredns                   1                   270010189bb39       coredns-7d764666f9-mf5xw            kube-system
	b677225504824       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   6 minutes ago       Exited              storage-provisioner       1                   c0df0f45f11cf       storage-provisioner                 kube-system
	10122e623612b       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   6 minutes ago       Running             busybox                   1                   b045d6d9411c4       busybox-769dd8b7dd-k7ks6            default
	790f2c013c89e       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13   6 minutes ago       Running             kindnet-cni               1                   963cd2abb4546       kindnet-qkqmv                       kube-system
	0dc7fc3f72aac       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   6 minutes ago       Running             kube-proxy                1                   d7813942f329c       kube-proxy-mhmmn                    kube-system
	200f949dea5c6       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   6 minutes ago       Exited              kube-controller-manager   4                   38efda04ee9ae       kube-controller-manager-ha-422549   kube-system
	a2c772463ab69       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   6 minutes ago       Running             kube-apiserver            2                   8bfe137c6f9b3       kube-apiserver-ha-422549            kube-system
	c3f87ac29708d       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   7 minutes ago       Exited              kube-apiserver            1                   8bfe137c6f9b3       kube-apiserver-ha-422549            kube-system
	79f65bc2e1dbc       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   7 minutes ago       Running             etcd                      1                   f60298eb8266f       etcd-ha-422549                      kube-system
	dd811e752da4c       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   7 minutes ago       Running             kube-scheduler            1                   ce9729522201c       kube-scheduler-ha-422549            kube-system
	feeed30c26dbb       28c5662932f6032ee4faba083d9c2af90232797e1d4f89d9892cb92b26fec299   7 minutes ago       Running             kube-vip                  0                   1eca96f45960b       kube-vip-ha-422549                  kube-system
	
	
	==> coredns [43a1d9657d3c893603414e1fad6c7f34c4c4ed3f7f0f2369eb8490cc9ea240ec] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:47173 - 60767 "HINFO IN 8301766955164973522.8999772451794302158. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029591992s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	
	
	==> coredns [deb6daab23cece988ebd68d94f1237fabdfd9ad9729504264927da30e4c1b5a0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:35210 - 10149 "HINFO IN 5398190722329959175.7924831905691569149. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027114236s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-422549
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_03_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:03:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:15:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:11:30 +0000   Sat, 27 Dec 2025 20:03:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:11:30 +0000   Sat, 27 Dec 2025 20:03:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:11:30 +0000   Sat, 27 Dec 2025 20:03:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:11:30 +0000   Sat, 27 Dec 2025 20:09:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-422549
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                acd356f3-8732-454f-9ea5-4ebb90b80a04
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-769dd8b7dd-k7ks6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7d764666f9-mf5xw             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 coredns-7d764666f9-n5d9d             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 etcd-ha-422549                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-qkqmv                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-422549             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-422549    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-mhmmn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-422549             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-422549                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  10m    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  8m35s  node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  5m36s  node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	
	
	Name:               ha-422549-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_27T20_04_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:04:00 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:06:58 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 27 Dec 2025 20:06:47 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 27 Dec 2025 20:06:47 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 27 Dec 2025 20:06:47 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 27 Dec 2025 20:06:47 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-422549-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                279e934d-6d34-4a11-83f0-a7f36011d6a2
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-769dd8b7dd-v6vks                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-422549-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-5wczs                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-422549-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-422549-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-nqr7h                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-422549-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-422549-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  10m    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  8m35s  node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  5m36s  node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  NodeNotReady    4m46s  node-controller  Node ha-422549-m02 status is now: NodeNotReady
	
	
	Name:               ha-422549-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_27T20_04_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:04:47 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:06:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 27 Dec 2025 20:06:39 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 27 Dec 2025 20:06:39 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 27 Dec 2025 20:06:39 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 27 Dec 2025 20:06:39 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-422549-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                dd826b6d-21ec-45c4-b392-2d4b9b2daddb
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-769dd8b7dd-qcz4b                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-422549-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-28svl                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-422549-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-422549-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-cg4z5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-422549-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-422549-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  10m    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  10m    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  10m    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  8m35s  node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  5m36s  node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  NodeNotReady    4m46s  node-controller  Node ha-422549-m03 status is now: NodeNotReady
	
	
	Name:               ha-422549-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_27T20_05_33_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:05:32 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:06:44 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 27 Dec 2025 20:06:44 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 27 Dec 2025 20:06:44 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 27 Dec 2025 20:06:44 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 27 Dec 2025 20:06:44 +0000   Sat, 27 Dec 2025 20:10:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-422549-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                45c0e480-898e-46d5-83ce-c457d7b4b021
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4hl7v       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m53s
	  kube-system                 kube-proxy-kscg6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  9m51s  node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  9m51s  node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  9m49s  node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  8m35s  node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  5m36s  node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  NodeNotReady    4m46s  node-controller  Node ha-422549-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Dec27 19:24] overlayfs: idmapped layers are currently not supported
	[Dec27 19:25] overlayfs: idmapped layers are currently not supported
	[Dec27 19:26] overlayfs: idmapped layers are currently not supported
	[ +16.831724] overlayfs: idmapped layers are currently not supported
	[Dec27 19:27] overlayfs: idmapped layers are currently not supported
	[Dec27 19:28] overlayfs: idmapped layers are currently not supported
	[ +28.388596] overlayfs: idmapped layers are currently not supported
	[Dec27 19:29] overlayfs: idmapped layers are currently not supported
	[  +9.242530] overlayfs: idmapped layers are currently not supported
	[Dec27 19:30] overlayfs: idmapped layers are currently not supported
	[ +11.577339] overlayfs: idmapped layers are currently not supported
	[Dec27 19:32] overlayfs: idmapped layers are currently not supported
	[ +19.186532] overlayfs: idmapped layers are currently not supported
	[Dec27 19:34] overlayfs: idmapped layers are currently not supported
	[Dec27 19:54] kauditd_printk_skb: 8 callbacks suppressed
	[Dec27 19:56] overlayfs: idmapped layers are currently not supported
	[Dec27 19:59] overlayfs: idmapped layers are currently not supported
	[Dec27 20:00] overlayfs: idmapped layers are currently not supported
	[Dec27 20:03] overlayfs: idmapped layers are currently not supported
	[ +31.019083] overlayfs: idmapped layers are currently not supported
	[Dec27 20:04] overlayfs: idmapped layers are currently not supported
	[Dec27 20:05] overlayfs: idmapped layers are currently not supported
	[Dec27 20:06] overlayfs: idmapped layers are currently not supported
	[Dec27 20:07] overlayfs: idmapped layers are currently not supported
	[  +3.687478] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [79f65bc2e1dbcf7ebe07acaf2143b45f059da3390e107fc3eb87595ccc5f920d] <==
	{"level":"warn","ts":"2025-12-27T20:15:25.441802Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:25.491917Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:25.502830Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:25.506212Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:25.512244Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:25.521342Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:25.530068Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:25.534269Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:25.537969Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:25.541520Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:25.541587Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:25.548861Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:25.558041Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:25.561927Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:25.565106Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:25.569113Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:25.589634Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:25.596913Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:25.601965Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:25.605602Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:25.608381Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:25.610083Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:25.619044Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:25.628490Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-12-27T20:15:25.641599Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:15:25 up  1:57,  0 user,  load average: 0.59, 1.07, 1.34
	Linux ha-422549 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [790f2c013c89e320d6ae1872fcbeb0dcede9e548fae087919a1d710b26587af9] <==
	I1227 20:14:49.450546       1 main.go:324] Node ha-422549-m04 has CIDR [10.244.3.0/24] 
	I1227 20:14:59.445558       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 20:14:59.445661       1 main.go:301] handling current node
	I1227 20:14:59.445700       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1227 20:14:59.445735       1 main.go:324] Node ha-422549-m02 has CIDR [10.244.1.0/24] 
	I1227 20:14:59.445899       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1227 20:14:59.445935       1 main.go:324] Node ha-422549-m03 has CIDR [10.244.2.0/24] 
	I1227 20:14:59.446020       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1227 20:14:59.446055       1 main.go:324] Node ha-422549-m04 has CIDR [10.244.3.0/24] 
	I1227 20:15:09.445623       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 20:15:09.445660       1 main.go:301] handling current node
	I1227 20:15:09.445676       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1227 20:15:09.445682       1 main.go:324] Node ha-422549-m02 has CIDR [10.244.1.0/24] 
	I1227 20:15:09.445872       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1227 20:15:09.445881       1 main.go:324] Node ha-422549-m03 has CIDR [10.244.2.0/24] 
	I1227 20:15:09.446114       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1227 20:15:09.446126       1 main.go:324] Node ha-422549-m04 has CIDR [10.244.3.0/24] 
	I1227 20:15:19.450346       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1227 20:15:19.450445       1 main.go:324] Node ha-422549-m04 has CIDR [10.244.3.0/24] 
	I1227 20:15:19.450623       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 20:15:19.450662       1 main.go:301] handling current node
	I1227 20:15:19.450700       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1227 20:15:19.450749       1 main.go:324] Node ha-422549-m02 has CIDR [10.244.1.0/24] 
	I1227 20:15:19.450842       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1227 20:15:19.450875       1 main.go:324] Node ha-422549-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [a2c772463ab69455651df640481fbedb03fe6400b56096056428e79c07be9499] <==
	I1227 20:09:16.090173       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:09:16.142608       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 20:09:16.165012       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:09:16.188215       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:16.247286       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 20:09:17.588850       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 20:09:17.588862       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 20:09:17.591046       1 cache.go:39] Caches are synced for autoregister controller
	I1227 20:09:17.591196       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:17.591213       1 policy_source.go:248] refreshing policies
	I1227 20:09:17.594498       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 20:09:17.632882       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 20:09:18.590962       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 20:09:18.719267       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:09:18.730017       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1227 20:09:18.736565       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1227 20:09:18.757260       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:09:18.776199       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:09:18.793727       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 20:09:18.793809       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	W1227 20:09:18.871915       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	W1227 20:09:38.848605       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1227 20:09:50.148007       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:09:50.298023       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:10:40.117662       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-apiserver [c3f87ac29708d39b5580f953e8ccc765b36b830cf405bc7750b8afe798a15a77] <==
	{"level":"warn","ts":"2025-12-27T20:08:34.277834Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400203fc20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277853Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400144c3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277870Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400203f2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277886Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002670b40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277902Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40029112c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277921Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001cc2f00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277917Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40021472c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277951Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002a345a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277938Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002cbb2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277969Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002671c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.277982Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026703c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278004Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f51c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278007Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40029fd680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278023Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400144cd20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278027Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002d0ef00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278040Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002cba960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278044Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002d0ef00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278056Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026714a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278062Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000ea3c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278071Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400102d2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-12-27T20:08:34.278373Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40021472c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	F1227 20:08:39.300772       1 hooks.go:204] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	{"level":"warn","ts":"2025-12-27T20:08:39.399795Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400102d2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	E1227 20:08:39.400034       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	
	
	==> kube-controller-manager [200f949dea5c60d38a5d90e0270e6343a89f068bd2083ee55915c81023b0e023] <==
	I1227 20:08:47.677940       1 serving.go:386] Generated self-signed cert in-memory
	I1227 20:08:47.685798       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1227 20:08:47.685893       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:08:47.687365       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1227 20:08:47.687564       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1227 20:08:47.687645       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1227 20:08:47.687811       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1227 20:08:57.704670       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [810850466f08e002011f0d991e32eb0109be47db69714d6e333a070593589ffc] <==
	I1227 20:09:49.817998       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.818055       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.818125       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.818182       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.818296       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.818398       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 20:09:49.823879       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.824187       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.824238       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.824323       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.826908       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549-m04"
	I1227 20:09:49.826980       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549"
	I1227 20:09:49.827019       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549-m02"
	I1227 20:09:49.827146       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549-m03"
	I1227 20:09:49.831582       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.831626       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.831651       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.837170       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 20:09:49.903784       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.914954       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:49.915054       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:09:49.915069       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:10:39.887314       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-422549-m04"
	I1227 20:10:39.888758       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-422549-m04"
	I1227 20:10:40.332581       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="PartialDisruption"
	
	
	==> kube-proxy [0dc7fc3f72aac5f705d9afdbd65e7c9da34760b5dcbc880ecf6236b8d0c7a88c] <==
	I1227 20:09:19.404089       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:09:19.491223       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:09:19.592597       1 shared_informer.go:377] "Caches are synced"
	I1227 20:09:19.592728       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1227 20:09:19.592858       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:09:19.644888       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:09:19.644944       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:09:19.649692       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:09:19.649993       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:09:19.650014       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:09:19.652082       1 config.go:200] "Starting service config controller"
	I1227 20:09:19.652103       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:09:19.652121       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:09:19.652124       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:09:19.652134       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:09:19.652138       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:09:19.652805       1 config.go:309] "Starting node config controller"
	I1227 20:09:19.652821       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:09:19.652829       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:09:19.753198       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 20:09:19.753207       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:09:19.753242       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [dd811e752da4c2025246e605ecc1690aba8141353e20fb91cdad4468a1c059f9] <==
	E1227 20:08:19.506524       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 20:08:19.569107       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 20:08:20.320229       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 20:08:20.376812       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 20:08:21.129930       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 20:08:39.022443       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 20:08:43.570864       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 20:08:47.134070       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 20:08:48.738392       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 20:08:49.986460       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 20:08:49.987992       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 20:08:50.727843       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 20:08:50.956450       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 20:08:51.960069       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 20:08:53.165271       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 20:08:57.344100       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 20:08:59.543840       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 20:09:01.253158       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 20:09:01.270041       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 20:09:01.345742       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 20:09:01.466100       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 20:09:02.611833       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 20:09:09.548910       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 20:09:10.555054       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	I1227 20:09:56.031915       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:10:29 ha-422549 kubelet[804]: I1227 20:10:29.927768     804 kubelet.go:3323] "Trying to delete pod" pod="kube-system/kube-vip-ha-422549" podUID="27494a9a-1459-4c40-99d3-c3e21df433ef"
	Dec 27 20:10:29 ha-422549 kubelet[804]: I1227 20:10:29.944622     804 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-422549"
	Dec 27 20:10:29 ha-422549 kubelet[804]: I1227 20:10:29.944659     804 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-422549"
	Dec 27 20:11:02 ha-422549 kubelet[804]: E1227 20:11:02.926814     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	Dec 27 20:11:12 ha-422549 kubelet[804]: E1227 20:11:12.927477     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	Dec 27 20:11:13 ha-422549 kubelet[804]: E1227 20:11:13.926597     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-ha-422549" containerName="kube-controller-manager"
	Dec 27 20:11:14 ha-422549 kubelet[804]: E1227 20:11:14.926505     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-ha-422549" containerName="kube-scheduler"
	Dec 27 20:11:33 ha-422549 kubelet[804]: E1227 20:11:33.928211     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-ha-422549" containerName="kube-apiserver"
	Dec 27 20:11:45 ha-422549 kubelet[804]: E1227 20:11:45.927376     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-ha-422549" containerName="etcd"
	Dec 27 20:12:25 ha-422549 kubelet[804]: E1227 20:12:25.926700     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	Dec 27 20:12:39 ha-422549 kubelet[804]: E1227 20:12:39.927819     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-ha-422549" containerName="kube-controller-manager"
	Dec 27 20:12:41 ha-422549 kubelet[804]: E1227 20:12:41.928937     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	Dec 27 20:12:44 ha-422549 kubelet[804]: E1227 20:12:44.927340     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-ha-422549" containerName="kube-scheduler"
	Dec 27 20:12:52 ha-422549 kubelet[804]: E1227 20:12:52.926348     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-ha-422549" containerName="kube-apiserver"
	Dec 27 20:13:04 ha-422549 kubelet[804]: E1227 20:13:04.927081     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-ha-422549" containerName="etcd"
	Dec 27 20:13:35 ha-422549 kubelet[804]: E1227 20:13:35.927017     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	Dec 27 20:13:53 ha-422549 kubelet[804]: E1227 20:13:53.926931     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	Dec 27 20:14:05 ha-422549 kubelet[804]: E1227 20:14:05.927026     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-ha-422549" containerName="kube-apiserver"
	Dec 27 20:14:09 ha-422549 kubelet[804]: E1227 20:14:09.926884     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-ha-422549" containerName="kube-controller-manager"
	Dec 27 20:14:11 ha-422549 kubelet[804]: E1227 20:14:11.927165     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-ha-422549" containerName="kube-scheduler"
	Dec 27 20:14:21 ha-422549 kubelet[804]: E1227 20:14:21.927398     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-ha-422549" containerName="etcd"
	Dec 27 20:14:55 ha-422549 kubelet[804]: E1227 20:14:55.927938     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	Dec 27 20:15:04 ha-422549 kubelet[804]: E1227 20:15:04.926424     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	Dec 27 20:15:16 ha-422549 kubelet[804]: E1227 20:15:16.927222     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-ha-422549" containerName="kube-scheduler"
	Dec 27 20:15:20 ha-422549 kubelet[804]: E1227 20:15:20.926597     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-ha-422549" containerName="kube-apiserver"
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-422549 -n ha-422549
helpers_test.go:270: (dbg) Run:  kubectl --context ha-422549 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (4.96s)

x
+
TestMultiControlPlane/serial/StopCluster (14.18s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-422549 stop --alsologtostderr -v 5: (13.930387105s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5: exit status 7 (138.046655ms)

                                                
                                                
-- stdout --
	ha-422549
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-422549-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-422549-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-422549-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:15:41.893235  337052 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:15:41.893442  337052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:15:41.893536  337052 out.go:374] Setting ErrFile to fd 2...
	I1227 20:15:41.893557  337052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:15:41.893855  337052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:15:41.894093  337052 out.go:368] Setting JSON to false
	I1227 20:15:41.894153  337052 mustload.go:66] Loading cluster: ha-422549
	I1227 20:15:41.894197  337052 notify.go:221] Checking for updates...
	I1227 20:15:41.894647  337052 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:41.894686  337052 status.go:174] checking status of ha-422549 ...
	I1227 20:15:41.895261  337052 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:15:41.915066  337052 status.go:371] ha-422549 host status = "Stopped" (err=<nil>)
	I1227 20:15:41.915093  337052 status.go:384] host is not running, skipping remaining checks
	I1227 20:15:41.915100  337052 status.go:176] ha-422549 status: &{Name:ha-422549 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:15:41.915130  337052 status.go:174] checking status of ha-422549-m02 ...
	I1227 20:15:41.915444  337052 cli_runner.go:164] Run: docker container inspect ha-422549-m02 --format={{.State.Status}}
	I1227 20:15:41.940277  337052 status.go:371] ha-422549-m02 host status = "Stopped" (err=<nil>)
	I1227 20:15:41.940302  337052 status.go:384] host is not running, skipping remaining checks
	I1227 20:15:41.940310  337052 status.go:176] ha-422549-m02 status: &{Name:ha-422549-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:15:41.940359  337052 status.go:174] checking status of ha-422549-m03 ...
	I1227 20:15:41.940667  337052 cli_runner.go:164] Run: docker container inspect ha-422549-m03 --format={{.State.Status}}
	I1227 20:15:41.959646  337052 status.go:371] ha-422549-m03 host status = "Stopped" (err=<nil>)
	I1227 20:15:41.959674  337052 status.go:384] host is not running, skipping remaining checks
	I1227 20:15:41.959682  337052 status.go:176] ha-422549-m03 status: &{Name:ha-422549-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:15:41.959702  337052 status.go:174] checking status of ha-422549-m04 ...
	I1227 20:15:41.960035  337052 cli_runner.go:164] Run: docker container inspect ha-422549-m04 --format={{.State.Status}}
	I1227 20:15:41.979802  337052 status.go:371] ha-422549-m04 host status = "Stopped" (err=<nil>)
	I1227 20:15:41.979822  337052 status.go:384] host is not running, skipping remaining checks
	I1227 20:15:41.979829  337052 status.go:176] ha-422549-m04 status: &{Name:ha-422549-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
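The stderr above shows how the status command derives host state: one `docker container inspect --format={{.State.Status}}` per node, and when the container is not running the kubelet/apiserver checks are skipped. A minimal sketch of that flow (assuming the docker CLI is on PATH; `nodeStatus` is an illustrative helper, not minikube's actual status code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// nodeStatus mirrors the flow in the stderr above: query the container
	// state via `docker container inspect`, and if the host is not running,
	// skip the remaining kubelet/apiserver checks and report "Stopped".
	func nodeStatus(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", err
		}
		if strings.TrimSpace(string(out)) != "running" {
			return "Stopped", nil
		}
		return "Running", nil
	}

	func main() {
		for _, n := range []string{"ha-422549", "ha-422549-m02", "ha-422549-m03", "ha-422549-m04"} {
			s, err := nodeStatus(n)
			if err != nil {
				fmt.Printf("%s: inspect failed: %v\n", n, err)
				continue
			}
			fmt.Printf("%s host: %s\n", n, s)
		}
	}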
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5": ha-422549
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-422549-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-422549-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-422549-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5": ha-422549
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-422549-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-422549-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-422549-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5": ha-422549
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-422549-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-422549-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-422549-m04
type: Worker
host: Stopped
kubelet: Stopped
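
The three assertions above (ha_test.go:545, :551, :554) are all count checks over the same status text. A hypothetical version of that kind of check, counting role and state lines in the captured output (not the actual ha_test.go helpers):

	package main

	import (
		"fmt"
		"strings"
	)

	// checkStoppedCluster is an illustrative stand-in for the assertions
	// quoted above: count node roles and states in `minikube status` output
	// and report any mismatch against the expected numbers.
	func checkStoppedCluster(status string, wantControlPlanes, wantStoppedKubelets, wantStoppedAPIServers int) []string {
		var problems []string
		if got := strings.Count(status, "type: Control Plane"); got != wantControlPlanes {
			problems = append(problems, fmt.Sprintf("control-plane nodes: got %d, want %d", got, wantControlPlanes))
		}
		if got := strings.Count(status, "kubelet: Stopped"); got != wantStoppedKubelets {
			problems = append(problems, fmt.Sprintf("stopped kubelets: got %d, want %d", got, wantStoppedKubelets))
		}
		if got := strings.Count(status, "apiserver: Stopped"); got != wantStoppedAPIServers {
			problems = append(problems, fmt.Sprintf("stopped apiservers: got %d, want %d", got, wantStoppedAPIServers))
		}
		return problems
	}

	func main() {
		// The status text printed above still lists three control planes and
		// four nodes, which trips expectations sized for a smaller cluster.
		status := "type: Control Plane\nkubelet: Stopped\napiserver: Stopped\n"
		for _, p := range checkStoppedCluster(status, 2, 3, 2) {
			fmt.Println("FAIL:", p)
		}
	}

The mismatch is consistent with the earlier DeleteSecondaryNode step never completing: the Audit table further down shows `ha-422549 node delete m03` with no end time, so the stopped cluster still reports three control planes instead of the expected two.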

                                                
                                                
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-422549
helpers_test.go:244: (dbg) docker inspect ha-422549:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf",
	        "Created": "2025-12-27T20:03:01.682141141Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 137,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:07:23.280905445Z",
	            "FinishedAt": "2025-12-27T20:15:41.57505881Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/hostname",
	        "HostsPath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/hosts",
	        "LogPath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf-json.log",
	        "Name": "/ha-422549",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-422549:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-422549",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf",
	                "LowerDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064/merged",
	                "UpperDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064/diff",
	                "WorkDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-422549",
	                "Source": "/var/lib/docker/volumes/ha-422549/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422549",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422549",
	                "name.minikube.sigs.k8s.io": "ha-422549",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422549": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9521cb9225c5842f69a8435c5cf5485b75f9a8b2c68158742ff27c2be32f5951",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422549",
	                        "53fd780c3df5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
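Most of the inspect dump above is boilerplate; the part that matters for this failure is the State block (Status "exited", ExitCode 137, which conventionally means the main process was killed, 128 + SIGKILL, with FinishedAt matching the stop). A narrower query can pull just those fields (a sketch assuming the docker CLI; the format string is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Ask docker inspect for only the state fields relevant to a stopped
		// node instead of the full JSON document shown above.
		out, err := exec.Command("docker", "inspect", "ha-422549",
			"--format", "{{.State.Status}} {{.State.ExitCode}} {{.State.FinishedAt}}").CombinedOutput()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Print(string(out)) // e.g. "exited 137 2025-12-27T20:15:41.57505881Z"
	}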
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-422549 -n ha-422549
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p ha-422549 -n ha-422549: exit status 7 (89.492249ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 7 (may be ok)
helpers_test.go:250: "ha-422549" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (14.18s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (85.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-422549 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m21.091029901s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5
ha_test.go:568: (dbg) Done: out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5: (1.154330958s)
ha_test.go:573: status says not two control-plane nodes are present: args "out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5": ha-422549
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-422549-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-422549-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-422549-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:576: status says not three hosts are running: args "out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5": ha-422549
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-422549-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-422549-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-422549-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:579: status says not three kubelets are running: args "out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5": ha-422549
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-422549-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-422549-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-422549-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:582: status says not two apiservers are running: args "out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5": ha-422549
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-422549-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-422549-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-422549-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:599: expected 3 nodes Ready status to be True, got 
-- stdout --
	' True
	 True
	 True
	 True
	'

                                                
                                                
-- /stdout --
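ha_test.go:599 expected three nodes with a Ready condition of True, but the go-template output above contains four True lines, consistent with node m03 never having been deleted earlier (the Audit table below shows the delete command without an end time). A hypothetical version of that count over the quoted output (not the actual test helper):

	package main

	import (
		"fmt"
		"strings"
	)

	// readyNodes counts Ready=True lines in the quoted output of
	// `kubectl get nodes -o go-template=...` as shown above.
	func readyNodes(out string) int {
		count := 0
		for _, line := range strings.Split(out, "\n") {
			if strings.TrimSpace(strings.Trim(line, "'")) == "True" {
				count++
			}
		}
		return count
	}

	func main() {
		out := "' True\n True\n True\n True\n'"
		if got, want := readyNodes(out), 3; got != want {
			fmt.Printf("expected %d nodes Ready status to be True, got %d\n", want, got)
		}
	}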
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-422549
helpers_test.go:244: (dbg) docker inspect ha-422549:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf",
	        "Created": "2025-12-27T20:03:01.682141141Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 337233,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:15:42.462104956Z",
	            "FinishedAt": "2025-12-27T20:15:41.57505881Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/hostname",
	        "HostsPath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/hosts",
	        "LogPath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf-json.log",
	        "Name": "/ha-422549",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-422549:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-422549",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf",
	                "LowerDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064/merged",
	                "UpperDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064/diff",
	                "WorkDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-422549",
	                "Source": "/var/lib/docker/volumes/ha-422549/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422549",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422549",
	                "name.minikube.sigs.k8s.io": "ha-422549",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bb71ec3c47b900c0fa3f8d54314b359c784cf244167438faa167df26866a5f2b",
	            "SandboxKey": "/var/run/docker/netns/bb71ec3c47b9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422549": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:de:7f:b9:2b:dc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9521cb9225c5842f69a8435c5cf5485b75f9a8b2c68158742ff27c2be32f5951",
	                    "EndpointID": "8d5c856b7af95de0f10e89f9cba406f7c7feb68311acbe9cee0239ed57d8152d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422549",
	                        "53fd780c3df5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-422549 -n ha-422549
helpers_test.go:253: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p ha-422549 logs -n 25: (2.039973668s)
helpers_test.go:261: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-422549 cp ha-422549-m03:/home/docker/cp-test.txt ha-422549-m04:/home/docker/cp-test_ha-422549-m03_ha-422549-m04.txt               │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test_ha-422549-m03_ha-422549-m04.txt                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp testdata/cp-test.txt ha-422549-m04:/home/docker/cp-test.txt                                                             │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3848759327/001/cp-test_ha-422549-m04.txt │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt ha-422549:/home/docker/cp-test_ha-422549-m04_ha-422549.txt                       │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549 sudo cat /home/docker/cp-test_ha-422549-m04_ha-422549.txt                                                 │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt ha-422549-m02:/home/docker/cp-test_ha-422549-m04_ha-422549-m02.txt               │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m02 sudo cat /home/docker/cp-test_ha-422549-m04_ha-422549-m02.txt                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt ha-422549-m03:/home/docker/cp-test_ha-422549-m04_ha-422549-m03.txt               │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m03 sudo cat /home/docker/cp-test_ha-422549-m04_ha-422549-m03.txt                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ node    │ ha-422549 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ node    │ ha-422549 node start m02 --alsologtostderr -v 5                                                                                      │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ node    │ ha-422549 node list --alsologtostderr -v 5                                                                                           │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │                     │
	│ stop    │ ha-422549 stop --alsologtostderr -v 5                                                                                                │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:07 UTC │
	│ start   │ ha-422549 start --wait true --alsologtostderr -v 5                                                                                   │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:07 UTC │                     │
	│ node    │ ha-422549 node list --alsologtostderr -v 5                                                                                           │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:15 UTC │                     │
	│ node    │ ha-422549 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:15 UTC │                     │
	│ stop    │ ha-422549 stop --alsologtostderr -v 5                                                                                                │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:15 UTC │ 27 Dec 25 20:15 UTC │
	│ start   │ ha-422549 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:15 UTC │ 27 Dec 25 20:17 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:15:42
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:15:42.161076  337106 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:15:42.161339  337106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:15:42.161371  337106 out.go:374] Setting ErrFile to fd 2...
	I1227 20:15:42.161395  337106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:15:42.161910  337106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:15:42.162549  337106 out.go:368] Setting JSON to false
	I1227 20:15:42.163583  337106 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":7095,"bootTime":1766859448,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:15:42.163745  337106 start.go:143] virtualization:  
	I1227 20:15:42.167252  337106 out.go:179] * [ha-422549] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:15:42.171750  337106 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:15:42.172029  337106 notify.go:221] Checking for updates...
	I1227 20:15:42.178183  337106 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:15:42.181404  337106 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:15:42.184507  337106 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:15:42.187835  337106 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:15:42.191251  337106 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:15:42.194951  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:42.195780  337106 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:15:42.234793  337106 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:15:42.234922  337106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:15:42.302450  337106 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 20:15:42.291742685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:15:42.302570  337106 docker.go:319] overlay module found
	I1227 20:15:42.305766  337106 out.go:179] * Using the docker driver based on existing profile
	I1227 20:15:42.308585  337106 start.go:309] selected driver: docker
	I1227 20:15:42.308605  337106 start.go:928] validating driver "docker" against &{Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacc
el:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:15:42.308760  337106 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:15:42.308874  337106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:15:42.372262  337106 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 20:15:42.36286995 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:15:42.372694  337106 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:15:42.372727  337106 cni.go:84] Creating CNI manager for ""
	I1227 20:15:42.372789  337106 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1227 20:15:42.372841  337106 start.go:353] cluster config:
	{Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:15:42.376040  337106 out.go:179] * Starting "ha-422549" primary control-plane node in "ha-422549" cluster
	I1227 20:15:42.378965  337106 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:15:42.382020  337106 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:15:42.384910  337106 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:15:42.384967  337106 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:15:42.385060  337106 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:15:42.385090  337106 cache.go:65] Caching tarball of preloaded images
	I1227 20:15:42.385178  337106 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:15:42.385188  337106 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:15:42.385327  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:15:42.406731  337106 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:15:42.406754  337106 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:15:42.406775  337106 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:15:42.406807  337106 start.go:360] acquireMachinesLock for ha-422549: {Name:mk939e8ee4c2bedc86cc6a99d76298e7b2a26ce2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:15:42.406878  337106 start.go:364] duration metric: took 49.87µs to acquireMachinesLock for "ha-422549"
	I1227 20:15:42.406911  337106 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:15:42.406918  337106 fix.go:54] fixHost starting: 
	I1227 20:15:42.407176  337106 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:15:42.424618  337106 fix.go:112] recreateIfNeeded on ha-422549: state=Stopped err=<nil>
	W1227 20:15:42.424651  337106 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:15:42.429793  337106 out.go:252] * Restarting existing docker container for "ha-422549" ...
	I1227 20:15:42.429887  337106 cli_runner.go:164] Run: docker start ha-422549
	I1227 20:15:42.679169  337106 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:15:42.705015  337106 kic.go:430] container "ha-422549" state is running.
	I1227 20:15:42.705398  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:15:42.726555  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:15:42.726800  337106 machine.go:94] provisionDockerMachine start ...
	I1227 20:15:42.726868  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:42.751689  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:42.752020  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1227 20:15:42.752029  337106 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:15:42.752567  337106 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60238->127.0.0.1:33183: read: connection reset by peer
	I1227 20:15:45.888954  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549
	
	I1227 20:15:45.888987  337106 ubuntu.go:182] provisioning hostname "ha-422549"
	I1227 20:15:45.889052  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:45.906473  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:45.906784  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1227 20:15:45.906800  337106 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-422549 && echo "ha-422549" | sudo tee /etc/hostname
	I1227 20:15:46.050632  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549
	
	I1227 20:15:46.050726  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:46.069043  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:46.069357  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1227 20:15:46.069378  337106 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422549' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422549/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422549' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:15:46.210430  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:15:46.210454  337106 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:15:46.210475  337106 ubuntu.go:190] setting up certificates
	I1227 20:15:46.210485  337106 provision.go:84] configureAuth start
	I1227 20:15:46.210557  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:15:46.227543  337106 provision.go:143] copyHostCerts
	I1227 20:15:46.227593  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:15:46.227625  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:15:46.227646  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:15:46.227726  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:15:46.227825  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:15:46.227847  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:15:46.227858  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:15:46.227890  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:15:46.227942  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:15:46.227963  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:15:46.227975  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:15:46.228004  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:15:46.228059  337106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.ha-422549 san=[127.0.0.1 192.168.49.2 ha-422549 localhost minikube]
	I1227 20:15:46.477651  337106 provision.go:177] copyRemoteCerts
	I1227 20:15:46.477745  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:15:46.477812  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:46.494398  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:15:46.592817  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:15:46.592877  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1227 20:15:46.609148  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:15:46.609214  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:15:46.626129  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:15:46.626186  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:15:46.643096  337106 provision.go:87] duration metric: took 432.58782ms to configureAuth
	I1227 20:15:46.643124  337106 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:15:46.643376  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:46.643487  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:46.660667  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:46.661005  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1227 20:15:46.661026  337106 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:15:47.007057  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:15:47.007122  337106 machine.go:97] duration metric: took 4.280312247s to provisionDockerMachine
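	provisionDockerMachine above writes a one-line sysconfig override and restarts cri-o over SSH so the 10.96.0.0/12 service CIDR is treated as an insecure registry range. A quick spot-check that it landed, assuming shell access to the node (for example via minikube ssh -p ha-422549):
	
		cat /etc/sysconfig/crio.minikube
		# expected to contain: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
		systemctl is-active crio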
	I1227 20:15:47.007150  337106 start.go:293] postStartSetup for "ha-422549" (driver="docker")
	I1227 20:15:47.007178  337106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:15:47.007279  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:15:47.007348  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:47.029053  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:15:47.129052  337106 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:15:47.132168  337106 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:15:47.132192  337106 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:15:47.132203  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:15:47.132254  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:15:47.132333  337106 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:15:47.132339  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:15:47.132433  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:15:47.139569  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:15:47.156024  337106 start.go:296] duration metric: took 148.843658ms for postStartSetup
	I1227 20:15:47.156149  337106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:15:47.156211  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:47.173109  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:15:47.266513  337106 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:15:47.270816  337106 fix.go:56] duration metric: took 4.86389233s for fixHost
	I1227 20:15:47.270844  337106 start.go:83] releasing machines lock for "ha-422549", held for 4.863953055s
	I1227 20:15:47.270913  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:15:47.287367  337106 ssh_runner.go:195] Run: cat /version.json
	I1227 20:15:47.287429  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:47.287703  337106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:15:47.287764  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:47.309269  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:15:47.309529  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:15:47.405178  337106 ssh_runner.go:195] Run: systemctl --version
	I1227 20:15:47.511199  337106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:15:47.547392  337106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:15:47.551737  337106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:15:47.551827  337106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:15:47.559324  337106 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
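	Any *bridge* or *podman* configs under /etc/cni/net.d would have been renamed with a .mk_disabled suffix here so they cannot conflict with the CNI minikube manages (kindnet for this multinode profile, per the CNI detection further down); in this run there was nothing to move. A hedged way to see what, if anything, was disabled on the node:
	
		ls -la /etc/cni/net.d/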
	I1227 20:15:47.559347  337106 start.go:496] detecting cgroup driver to use...
	I1227 20:15:47.559388  337106 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:15:47.559434  337106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:15:47.574366  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:15:47.587100  337106 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:15:47.587164  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:15:47.602600  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:15:47.615779  337106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:15:47.738070  337106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:15:47.863690  337106 docker.go:234] disabling docker service ...
	I1227 20:15:47.863793  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:15:47.878841  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:15:47.891780  337106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:15:48.005581  337106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:15:48.146501  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:15:48.159335  337106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:15:48.172971  337106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:15:48.173057  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.182022  337106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:15:48.182123  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.190766  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.199691  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.208613  337106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:15:48.216583  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.225357  337106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.238325  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.247144  337106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:15:48.254972  337106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:15:48.262335  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:15:48.380620  337106 ssh_runner.go:195] Run: sudo systemctl restart crio
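	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before this restart: pause_image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager becomes cgroupfs with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A hedged spot-check on the node after the restart:
	
		sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
		sudo crictl version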
	I1227 20:15:48.551875  337106 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:15:48.551947  337106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:15:48.555685  337106 start.go:574] Will wait 60s for crictl version
	I1227 20:15:48.555757  337106 ssh_runner.go:195] Run: which crictl
	I1227 20:15:48.559221  337106 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:15:48.585662  337106 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:15:48.585789  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:15:48.613651  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:15:48.644252  337106 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:15:48.647214  337106 cli_runner.go:164] Run: docker network inspect ha-422549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:15:48.663170  337106 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 20:15:48.666927  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:15:48.676701  337106 kubeadm.go:884] updating cluster {Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:15:48.676861  337106 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:15:48.676926  337106 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:15:48.713302  337106 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:15:48.713323  337106 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:15:48.713375  337106 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:15:48.738578  337106 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:15:48.738606  337106 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:15:48.738615  337106 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I1227 20:15:48.738716  337106 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422549 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
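	The [Unit]/[Service] fragment above is the kubelet systemd drop-in that the scp steps below copy from memory to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, alongside /lib/systemd/system/kubelet.service. A minimal sketch for confirming the effective flags on the node once those files are in place:
	
		systemctl cat kubelet | grep -- --node-ip
		sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf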
	I1227 20:15:48.738798  337106 ssh_runner.go:195] Run: crio config
	I1227 20:15:48.806339  337106 cni.go:84] Creating CNI manager for ""
	I1227 20:15:48.806361  337106 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1227 20:15:48.806383  337106 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:15:48.806406  337106 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422549 NodeName:ha-422549 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:15:48.806540  337106 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422549"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:15:48.806566  337106 kube-vip.go:115] generating kube-vip config ...
	I1227 20:15:48.806619  337106 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 20:15:48.818243  337106 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
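	With the ip_vs modules unavailable, control-plane load-balancing is skipped and the manifest generated below relies on ARP announcement of the VIP instead (vip_arp: "true", no lb_enable entry). If IPVS load-balancing were wanted, the modules would have to be loadable on the host kernel first; a hedged, host-dependent sketch:
	
		sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
		lsmod | grep ip_vs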
	I1227 20:15:48.818375  337106 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
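	This static-pod manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, so the kubelet runs kube-vip v1.0.3 on this node and announces the HA VIP 192.168.49.254 on eth0, with leader election deciding which control-plane node holds it. A hedged check once the node is up:
	
		sudo crictl ps --name kube-vip
		ip addr show eth0 | grep 192.168.49.254   # typically present only on the current leader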
	I1227 20:15:48.818447  337106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:15:48.825705  337106 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:15:48.825785  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1227 20:15:48.832852  337106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1227 20:15:48.844713  337106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:15:48.856701  337106 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
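	The file just written is the kubeadm config rendered above; minikube only applies it if it differs from the existing /var/tmp/minikube/kubeadm.yaml (see the diff step further down). If an explicit sanity check is wanted, a hedged sketch using the bundled binaries shown in this log, assuming the `kubeadm config validate` subcommand is available in v1.35.0:
	
		sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new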
	I1227 20:15:48.868844  337106 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 20:15:48.880915  337106 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 20:15:48.884598  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:15:48.893875  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:15:49.019776  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:15:49.036215  337106 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549 for IP: 192.168.49.2
	I1227 20:15:49.036242  337106 certs.go:195] generating shared ca certs ...
	I1227 20:15:49.036258  337106 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:15:49.036390  337106 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:15:49.036447  337106 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:15:49.036460  337106 certs.go:257] generating profile certs ...
	I1227 20:15:49.036541  337106 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key
	I1227 20:15:49.036611  337106 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.743f7ef3
	I1227 20:15:49.036653  337106 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key
	I1227 20:15:49.036666  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:15:49.036679  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:15:49.036694  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:15:49.036704  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:15:49.036720  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:15:49.036731  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:15:49.036746  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:15:49.036756  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:15:49.036804  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:15:49.036836  337106 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:15:49.036848  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:15:49.036874  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:15:49.036910  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:15:49.036939  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:15:49.037002  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:15:49.037036  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:15:49.037057  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:49.037072  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:15:49.037704  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:15:49.057400  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:15:49.076605  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:15:49.095621  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:15:49.115441  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:15:49.135019  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:15:49.162312  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:15:49.179956  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:15:49.203774  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:15:49.228107  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:15:49.246930  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:15:49.265916  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:15:49.281838  337106 ssh_runner.go:195] Run: openssl version
	I1227 20:15:49.287989  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:49.295912  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:15:49.303435  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:49.307018  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:49.307115  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:49.347922  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:15:49.354929  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:15:49.361715  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:15:49.368688  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:15:49.372719  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:15:49.372798  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:15:49.413917  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:15:49.421060  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:15:49.428016  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:15:49.435273  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:15:49.438964  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:15:49.439075  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:15:49.480693  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
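	The repeating pattern above links each CA under /usr/share/ca-certificates, computes its OpenSSL subject hash, and expects a matching <hash>.0 symlink in /etc/ssl/certs, which is how OpenSSL locates trusted CAs by hash. A hedged manual check for one of them on the node:
	
		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		ls -l /etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0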
	I1227 20:15:49.488361  337106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:15:49.492062  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:15:49.532621  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:15:49.573227  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:15:49.615004  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:15:49.660835  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:15:49.706320  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
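	Each -checkend 86400 call asks whether the certificate expires within the next 24 hours (86400 seconds); an expiring certificate would exit non-zero here. The equivalent manual look at one of them, assuming the same paths on the node:
	
		sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
		sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt && echo "valid for at least another 24h"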
	I1227 20:15:49.793965  337106 kubeadm.go:401] StartCluster: {Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:15:49.794119  337106 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:15:49.794193  337106 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:15:49.873661  337106 cri.go:96] found id: "acdd287d4087fec2c7c00eb589c13b06231128c1441e2db4a8f74c57600a6e67"
	I1227 20:15:49.873685  337106 cri.go:96] found id: "7c4ac1dbe59ad7d3143dfe74886a6bc3058bfad37ae864b855a6e47c1a4d984e"
	I1227 20:15:49.873690  337106 cri.go:96] found id: "6b0b91d1da0a4c385d0d3110ebc1d18efbc54bab7d6da6bba31c072f2fbd4da9"
	I1227 20:15:49.873694  337106 cri.go:96] found id: "776b31832bd3b44eb905f188f6aa9c0428a287ba7eaeb4ed172dd8bef1b5795b"
	I1227 20:15:49.873697  337106 cri.go:96] found id: "97ce57129ce3bc803fd62d49e1f3f06d06aa64d93e2ef36f372084cbbd21e34a"
	I1227 20:15:49.873717  337106 cri.go:96] found id: ""
	I1227 20:15:49.873771  337106 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:15:49.891661  337106 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:15:49Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:15:49.891749  337106 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:15:49.906600  337106 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:15:49.906624  337106 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:15:49.906703  337106 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:15:49.919028  337106 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:15:49.919479  337106 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-422549" does not appear in /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:15:49.919620  337106 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-272475/kubeconfig needs updating (will repair): [kubeconfig missing "ha-422549" cluster setting kubeconfig missing "ha-422549" context setting]
	I1227 20:15:49.919957  337106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:15:49.920555  337106 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 20:15:49.921302  337106 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1227 20:15:49.921327  337106 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1227 20:15:49.921333  337106 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1227 20:15:49.921364  337106 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1227 20:15:49.921405  337106 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1227 20:15:49.921411  337106 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1227 20:15:49.921423  337106 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1227 20:15:49.921745  337106 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:15:49.936013  337106 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1227 20:15:49.936040  337106 kubeadm.go:602] duration metric: took 29.409884ms to restartPrimaryControlPlane
	I1227 20:15:49.936051  337106 kubeadm.go:403] duration metric: took 142.110676ms to StartCluster
	I1227 20:15:49.936075  337106 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:15:49.936142  337106 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:15:49.937228  337106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:15:49.937930  337106 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:15:49.938100  337106 start.go:242] waiting for startup goroutines ...
	I1227 20:15:49.938130  337106 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:15:49.939423  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:49.942218  337106 out.go:179] * Enabled addons: 
	I1227 20:15:49.945329  337106 addons.go:530] duration metric: took 7.202537ms for enable addons: enabled=[]
	I1227 20:15:49.945417  337106 start.go:247] waiting for cluster config update ...
	I1227 20:15:49.945442  337106 start.go:256] writing updated cluster config ...
	I1227 20:15:49.948818  337106 out.go:203] 
	I1227 20:15:49.952226  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:49.952424  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:15:49.955848  337106 out.go:179] * Starting "ha-422549-m02" control-plane node in "ha-422549" cluster
	I1227 20:15:49.958975  337106 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:15:49.962204  337106 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:15:49.965179  337106 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:15:49.965273  337106 cache.go:65] Caching tarball of preloaded images
	I1227 20:15:49.965249  337106 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:15:49.965709  337106 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:15:49.965749  337106 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:15:49.965939  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:15:49.990566  337106 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:15:49.990585  337106 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:15:49.990599  337106 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:15:49.990629  337106 start.go:360] acquireMachinesLock for ha-422549-m02: {Name:mk8fc7aa5d6c41749cc4b9db094e3fd243d8b868 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:15:49.990677  337106 start.go:364] duration metric: took 33.255µs to acquireMachinesLock for "ha-422549-m02"
	I1227 20:15:49.990697  337106 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:15:49.990704  337106 fix.go:54] fixHost starting: m02
	I1227 20:15:49.990960  337106 cli_runner.go:164] Run: docker container inspect ha-422549-m02 --format={{.State.Status}}
	I1227 20:15:50.012661  337106 fix.go:112] recreateIfNeeded on ha-422549-m02: state=Stopped err=<nil>
	W1227 20:15:50.012689  337106 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:15:50.016334  337106 out.go:252] * Restarting existing docker container for "ha-422549-m02" ...
	I1227 20:15:50.016437  337106 cli_runner.go:164] Run: docker start ha-422549-m02
	I1227 20:15:50.398628  337106 cli_runner.go:164] Run: docker container inspect ha-422549-m02 --format={{.State.Status}}
	I1227 20:15:50.427580  337106 kic.go:430] container "ha-422549-m02" state is running.
	I1227 20:15:50.427943  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m02
	I1227 20:15:50.459424  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:15:50.459657  337106 machine.go:94] provisionDockerMachine start ...
	I1227 20:15:50.459714  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:50.490531  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:50.493631  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1227 20:15:50.493650  337106 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:15:50.494339  337106 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 20:15:53.641274  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m02
	
	I1227 20:15:53.641349  337106 ubuntu.go:182] provisioning hostname "ha-422549-m02"
	I1227 20:15:53.641467  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:53.663080  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:53.663387  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1227 20:15:53.663406  337106 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-422549-m02 && echo "ha-422549-m02" | sudo tee /etc/hostname
	I1227 20:15:53.819054  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m02
	
	I1227 20:15:53.819139  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:53.847197  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:53.847500  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1227 20:15:53.847516  337106 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422549-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422549-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422549-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:15:53.989824  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:15:53.989849  337106 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:15:53.989866  337106 ubuntu.go:190] setting up certificates
	I1227 20:15:53.989878  337106 provision.go:84] configureAuth start
	I1227 20:15:53.989941  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m02
	I1227 20:15:54.009870  337106 provision.go:143] copyHostCerts
	I1227 20:15:54.009915  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:15:54.009950  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:15:54.009964  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:15:54.010041  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:15:54.010125  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:15:54.010148  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:15:54.010153  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:15:54.010182  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:15:54.010267  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:15:54.010289  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:15:54.010297  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:15:54.010323  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:15:54.010374  337106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.ha-422549-m02 san=[127.0.0.1 192.168.49.3 ha-422549-m02 localhost minikube]
	I1227 20:15:54.260286  337106 provision.go:177] copyRemoteCerts
	I1227 20:15:54.260405  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:15:54.260467  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:54.278663  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:15:54.377066  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:15:54.377172  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:15:54.395067  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:15:54.395180  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 20:15:54.412398  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:15:54.412507  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 20:15:54.429091  337106 provision.go:87] duration metric: took 439.199295ms to configureAuth
	I1227 20:15:54.429119  337106 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:15:54.429346  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:54.429480  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:54.446402  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:54.446712  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1227 20:15:54.446736  337106 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:15:54.817328  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:15:54.817351  337106 machine.go:97] duration metric: took 4.357685623s to provisionDockerMachine
	I1227 20:15:54.817363  337106 start.go:293] postStartSetup for "ha-422549-m02" (driver="docker")
	I1227 20:15:54.817373  337106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:15:54.817438  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:15:54.817558  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:54.834291  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:15:54.933155  337106 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:15:54.936441  337106 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:15:54.936469  337106 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:15:54.936480  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:15:54.936536  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:15:54.936618  337106 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:15:54.936632  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:15:54.936739  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:15:54.944112  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:15:54.961353  337106 start.go:296] duration metric: took 143.973459ms for postStartSetup
	I1227 20:15:54.961439  337106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:15:54.961529  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:54.978679  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:15:55.075001  337106 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:15:55.080166  337106 fix.go:56] duration metric: took 5.089454661s for fixHost
	I1227 20:15:55.080193  337106 start.go:83] releasing machines lock for "ha-422549-m02", held for 5.089507139s
	I1227 20:15:55.080267  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m02
	I1227 20:15:55.100982  337106 out.go:179] * Found network options:
	I1227 20:15:55.103953  337106 out.go:179]   - NO_PROXY=192.168.49.2
	W1227 20:15:55.106802  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:15:55.106845  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	I1227 20:15:55.106919  337106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:15:55.106964  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:55.107011  337106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:15:55.107066  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:55.130151  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:15:55.137687  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:15:55.324223  337106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:15:55.328436  337106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:15:55.328502  337106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:15:55.336088  337106 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:15:55.336120  337106 start.go:496] detecting cgroup driver to use...
	I1227 20:15:55.336165  337106 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:15:55.336216  337106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:15:55.350639  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:15:55.363702  337106 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:15:55.363812  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:15:55.380023  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:15:55.396017  337106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:15:55.627299  337106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:15:55.867067  337106 docker.go:234] disabling docker service ...
	I1227 20:15:55.867179  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:15:55.887006  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:15:55.903434  337106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:15:56.147368  337106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:15:56.372701  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:15:56.386071  337106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:15:56.438830  337106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:15:56.438945  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.453154  337106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:15:56.453272  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.469839  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.480255  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.492229  337106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:15:56.504717  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.522023  337106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.536543  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.549900  337106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:15:56.562631  337106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:15:56.570307  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:15:56.790142  337106 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:15:57.038862  337106 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:15:57.038970  337106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:15:57.042575  337106 start.go:574] Will wait 60s for crictl version
	I1227 20:15:57.042675  337106 ssh_runner.go:195] Run: which crictl
	I1227 20:15:57.046123  337106 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:15:57.079472  337106 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:15:57.079604  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:15:57.111539  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:15:57.144245  337106 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:15:57.147176  337106 out.go:179]   - env NO_PROXY=192.168.49.2
	I1227 20:15:57.150339  337106 cli_runner.go:164] Run: docker network inspect ha-422549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:15:57.166874  337106 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 20:15:57.170704  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:15:57.180393  337106 mustload.go:66] Loading cluster: ha-422549
	I1227 20:15:57.180638  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:57.180911  337106 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:15:57.198058  337106 host.go:66] Checking if "ha-422549" exists ...
	I1227 20:15:57.198339  337106 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549 for IP: 192.168.49.3
	I1227 20:15:57.198353  337106 certs.go:195] generating shared ca certs ...
	I1227 20:15:57.198367  337106 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:15:57.198490  337106 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:15:57.198538  337106 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:15:57.198549  337106 certs.go:257] generating profile certs ...
	I1227 20:15:57.198625  337106 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key
	I1227 20:15:57.198688  337106 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.982843aa
	I1227 20:15:57.198735  337106 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key
	I1227 20:15:57.198748  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:15:57.198762  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:15:57.198779  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:15:57.198791  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:15:57.198810  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:15:57.198822  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:15:57.198837  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:15:57.198847  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:15:57.198901  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:15:57.198935  337106 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:15:57.198948  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:15:57.198974  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:15:57.199001  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:15:57.199031  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:15:57.199079  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:15:57.199116  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:15:57.199131  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:15:57.199146  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:57.199227  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:57.217178  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:15:57.309803  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1227 20:15:57.313760  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1227 20:15:57.321367  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1227 20:15:57.324564  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1227 20:15:57.332196  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1227 20:15:57.335588  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1227 20:15:57.343125  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1227 20:15:57.346654  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1227 20:15:57.354254  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1227 20:15:57.357588  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1227 20:15:57.365565  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1227 20:15:57.369083  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1227 20:15:57.377616  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:15:57.394501  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:15:57.411297  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:15:57.428988  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:15:57.454933  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:15:57.477949  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:15:57.503718  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:15:57.527644  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:15:57.546021  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:15:57.562799  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:15:57.579794  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:15:57.596739  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1227 20:15:57.608968  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1227 20:15:57.621234  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1227 20:15:57.633283  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1227 20:15:57.645247  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1227 20:15:57.656994  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1227 20:15:57.668811  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (728 bytes)
	I1227 20:15:57.680824  337106 ssh_runner.go:195] Run: openssl version
	I1227 20:15:57.687264  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:15:57.694487  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:15:57.701580  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:15:57.705288  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:15:57.705345  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:15:57.746792  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:15:57.754009  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:15:57.760822  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:15:57.767703  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:15:57.771201  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:15:57.771305  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:15:57.813599  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:15:57.821036  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:57.828245  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:15:57.835688  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:57.839528  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:57.839640  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:57.880298  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:15:57.887708  337106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:15:57.891264  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:15:57.931649  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:15:57.972880  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:15:58.015739  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:15:58.057920  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:15:58.099308  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 20:15:58.140147  337106 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.35.0 crio true true} ...
	I1227 20:15:58.140265  337106 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422549-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:15:58.140313  337106 kube-vip.go:115] generating kube-vip config ...
	I1227 20:15:58.140373  337106 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 20:15:58.151945  337106 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:15:58.152003  337106 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1227 20:15:58.152075  337106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:15:58.159193  337106 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:15:58.159305  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1227 20:15:58.166464  337106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 20:15:58.178769  337106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:15:58.190381  337106 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 20:15:58.202642  337106 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 20:15:58.206198  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:15:58.215567  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:15:58.331455  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:15:58.345573  337106 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:15:58.345907  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:58.350455  337106 out.go:179] * Verifying Kubernetes components...
	I1227 20:15:58.353287  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:15:58.476026  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:15:58.491956  337106 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1227 20:15:58.492036  337106 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1227 20:15:58.492360  337106 node_ready.go:35] waiting up to 6m0s for node "ha-422549-m02" to be "Ready" ...
	W1227 20:16:08.493659  337106 node_ready.go:55] error getting node "ha-422549-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422549-m02": net/http: TLS handshake timeout
	W1227 20:16:13.724508  337106 node_ready.go:57] node "ha-422549-m02" has "Ready":"Unknown" status (will retry)
	I1227 20:16:13.998074  337106 node_ready.go:49] node "ha-422549-m02" is "Ready"
	I1227 20:16:13.998104  337106 node_ready.go:38] duration metric: took 15.505718327s for node "ha-422549-m02" to be "Ready" ...
	I1227 20:16:13.998117  337106 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:16:13.998195  337106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:16:14.018969  337106 api_server.go:72] duration metric: took 15.673348785s to wait for apiserver process to appear ...
	I1227 20:16:14.019000  337106 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:16:14.019022  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:14.028770  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:16:14.028803  337106 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:16:14.519178  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:14.550966  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:16:14.551052  337106 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:16:15.019197  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:15.046385  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:16:15.046479  337106 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:16:15.519851  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:15.557956  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:16:15.558047  337106 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:16:16.019247  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:16.033187  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:16:16.033267  337106 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:16:16.519670  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:16.536800  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1227 20:16:16.539603  337106 api_server.go:141] control plane version: v1.35.0
	I1227 20:16:16.539669  337106 api_server.go:131] duration metric: took 2.52066052s to wait for apiserver health ...
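The retries above poll the apiserver's /healthz endpoint until it returns HTTP 200. For readers reproducing the probe by hand, a minimal standalone sketch in Go is shown below; the URL, the 30-second budget, the 500 ms retry interval, and the decision to skip TLS verification are illustrative assumptions for a quick manual check, not minikube's actual implementation (minikube authenticates with the cluster CA).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
// Skipping certificate verification is an assumption for a quick manual probe.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz check passed
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence of the retries above
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ok")
}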
	I1227 20:16:16.539693  337106 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:16:16.570231  337106 system_pods.go:59] 26 kube-system pods found
	I1227 20:16:16.570324  337106 system_pods.go:61] "coredns-7d764666f9-mf5xw" [5a7f58c2-f991-46f0-9ece-9a561d53d25f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:16.570350  337106 system_pods.go:61] "coredns-7d764666f9-n5d9d" [159febfd-c1e4-4897-a372-59e4a3069914] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:16.570386  337106 system_pods.go:61] "etcd-ha-422549" [8f26f563-e734-4add-aefe-484f0e873a1e] Running
	I1227 20:16:16.570414  337106 system_pods.go:61] "etcd-ha-422549-m02" [5fed7e48-07c4-4a07-b63b-0fccbd196f6f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:16:16.570435  337106 system_pods.go:61] "etcd-ha-422549-m03" [d22f78a1-2f4c-41e6-b65a-bf7108686c71] Running
	I1227 20:16:16.570460  337106 system_pods.go:61] "kindnet-28svl" [1494f795-941f-418e-8090-098225eb9c6a] Running
	I1227 20:16:16.570493  337106 system_pods.go:61] "kindnet-4hl7v" [ea2cc8a1-df16-440c-a093-a5d915b249b4] Running
	I1227 20:16:16.570521  337106 system_pods.go:61] "kindnet-5wczs" [df3d7298-4140-464f-a6e8-c614e1683488] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:16:16.570663  337106 system_pods.go:61] "kindnet-qkqmv" [66d834ae-af1b-456d-ae48-8a0d6608f961] Running
	I1227 20:16:16.570696  337106 system_pods.go:61] "kube-apiserver-ha-422549" [14f8e794-2ba7-477d-806b-03dd5a33d868] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:16:16.570721  337106 system_pods.go:61] "kube-apiserver-ha-422549-m02" [a4b97cc6-26ef-4d46-9ef9-bdee08eb89d6] Running
	I1227 20:16:16.570746  337106 system_pods.go:61] "kube-apiserver-ha-422549-m03" [71f23288-3e33-4bc8-9182-08c190ae026f] Running
	I1227 20:16:16.570787  337106 system_pods.go:61] "kube-controller-manager-ha-422549" [b69af60f-4eac-4e85-aa81-66b7616a46f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:16:16.570820  337106 system_pods.go:61] "kube-controller-manager-ha-422549-m02" [07c0e68f-76e5-4cee-92a2-05dd2fb4c3e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:16:16.570843  337106 system_pods.go:61] "kube-controller-manager-ha-422549-m03" [af291694-2986-455c-8588-c2879d10ff3b] Running
	I1227 20:16:16.570865  337106 system_pods.go:61] "kube-proxy-cg4z5" [42f74e61-eb67-4d02-8f08-f77f7163f5fc] Running
	I1227 20:16:16.570897  337106 system_pods.go:61] "kube-proxy-kscg6" [baa716d5-546a-4922-ba51-fe1116e36c75] Running
	I1227 20:16:16.570923  337106 system_pods.go:61] "kube-proxy-mhmmn" [d69029af-1fc4-4a31-913e-92e1231e845a] Running
	I1227 20:16:16.570948  337106 system_pods.go:61] "kube-proxy-nqr7h" [d0fc3ef5-765a-4376-94e6-42237908d3fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 20:16:16.570969  337106 system_pods.go:61] "kube-scheduler-ha-422549" [549e105d-d2e7-42b6-ae48-098d590e7b1d] Running
	I1227 20:16:16.571002  337106 system_pods.go:61] "kube-scheduler-ha-422549-m02" [db9187da-87a8-4b73-baea-76f3d9ef35c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:16:16.571026  337106 system_pods.go:61] "kube-scheduler-ha-422549-m03" [2a6b70b3-5303-404f-8b1d-1a65b9b81555] Running
	I1227 20:16:16.571044  337106 system_pods.go:61] "kube-vip-ha-422549" [32d647ce-90ed-4f56-b4c8-7ed445019d88] Running
	I1227 20:16:16.571067  337106 system_pods.go:61] "kube-vip-ha-422549-m02" [ddde9374-24b7-498d-b829-6902c612b272] Running
	I1227 20:16:16.571109  337106 system_pods.go:61] "kube-vip-ha-422549-m03" [39a60c56-1bf0-4232-9af0-f55e0c66a33d] Running
	I1227 20:16:16.571136  337106 system_pods.go:61] "storage-provisioner" [0d645eab-223f-4dd6-9518-6ab4a21d4c09] Running
	I1227 20:16:16.571156  337106 system_pods.go:74] duration metric: took 31.434553ms to wait for pod list to return data ...
	I1227 20:16:16.571179  337106 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:16:16.590199  337106 default_sa.go:45] found service account: "default"
	I1227 20:16:16.590265  337106 default_sa.go:55] duration metric: took 19.064027ms for default service account to be created ...
	I1227 20:16:16.590290  337106 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:16:16.623079  337106 system_pods.go:86] 26 kube-system pods found
	I1227 20:16:16.623169  337106 system_pods.go:89] "coredns-7d764666f9-mf5xw" [5a7f58c2-f991-46f0-9ece-9a561d53d25f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:16.623195  337106 system_pods.go:89] "coredns-7d764666f9-n5d9d" [159febfd-c1e4-4897-a372-59e4a3069914] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:16.623234  337106 system_pods.go:89] "etcd-ha-422549" [8f26f563-e734-4add-aefe-484f0e873a1e] Running
	I1227 20:16:16.623263  337106 system_pods.go:89] "etcd-ha-422549-m02" [5fed7e48-07c4-4a07-b63b-0fccbd196f6f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:16:16.623283  337106 system_pods.go:89] "etcd-ha-422549-m03" [d22f78a1-2f4c-41e6-b65a-bf7108686c71] Running
	I1227 20:16:16.623303  337106 system_pods.go:89] "kindnet-28svl" [1494f795-941f-418e-8090-098225eb9c6a] Running
	I1227 20:16:16.623335  337106 system_pods.go:89] "kindnet-4hl7v" [ea2cc8a1-df16-440c-a093-a5d915b249b4] Running
	I1227 20:16:16.623362  337106 system_pods.go:89] "kindnet-5wczs" [df3d7298-4140-464f-a6e8-c614e1683488] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:16:16.623385  337106 system_pods.go:89] "kindnet-qkqmv" [66d834ae-af1b-456d-ae48-8a0d6608f961] Running
	I1227 20:16:16.623411  337106 system_pods.go:89] "kube-apiserver-ha-422549" [14f8e794-2ba7-477d-806b-03dd5a33d868] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:16:16.623447  337106 system_pods.go:89] "kube-apiserver-ha-422549-m02" [a4b97cc6-26ef-4d46-9ef9-bdee08eb89d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:16:16.623475  337106 system_pods.go:89] "kube-apiserver-ha-422549-m03" [71f23288-3e33-4bc8-9182-08c190ae026f] Running
	I1227 20:16:16.623501  337106 system_pods.go:89] "kube-controller-manager-ha-422549" [b69af60f-4eac-4e85-aa81-66b7616a46f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:16:16.623525  337106 system_pods.go:89] "kube-controller-manager-ha-422549-m02" [07c0e68f-76e5-4cee-92a2-05dd2fb4c3e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:16:16.623557  337106 system_pods.go:89] "kube-controller-manager-ha-422549-m03" [af291694-2986-455c-8588-c2879d10ff3b] Running
	I1227 20:16:16.623583  337106 system_pods.go:89] "kube-proxy-cg4z5" [42f74e61-eb67-4d02-8f08-f77f7163f5fc] Running
	I1227 20:16:16.623607  337106 system_pods.go:89] "kube-proxy-kscg6" [baa716d5-546a-4922-ba51-fe1116e36c75] Running
	I1227 20:16:16.623632  337106 system_pods.go:89] "kube-proxy-mhmmn" [d69029af-1fc4-4a31-913e-92e1231e845a] Running
	I1227 20:16:16.623664  337106 system_pods.go:89] "kube-proxy-nqr7h" [d0fc3ef5-765a-4376-94e6-42237908d3fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 20:16:16.623690  337106 system_pods.go:89] "kube-scheduler-ha-422549" [549e105d-d2e7-42b6-ae48-098d590e7b1d] Running
	I1227 20:16:16.623713  337106 system_pods.go:89] "kube-scheduler-ha-422549-m02" [db9187da-87a8-4b73-baea-76f3d9ef35c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:16:16.623737  337106 system_pods.go:89] "kube-scheduler-ha-422549-m03" [2a6b70b3-5303-404f-8b1d-1a65b9b81555] Running
	I1227 20:16:16.623769  337106 system_pods.go:89] "kube-vip-ha-422549" [32d647ce-90ed-4f56-b4c8-7ed445019d88] Running
	I1227 20:16:16.623794  337106 system_pods.go:89] "kube-vip-ha-422549-m02" [ddde9374-24b7-498d-b829-6902c612b272] Running
	I1227 20:16:16.623818  337106 system_pods.go:89] "kube-vip-ha-422549-m03" [39a60c56-1bf0-4232-9af0-f55e0c66a33d] Running
	I1227 20:16:16.623842  337106 system_pods.go:89] "storage-provisioner" [0d645eab-223f-4dd6-9518-6ab4a21d4c09] Running
	I1227 20:16:16.623877  337106 system_pods.go:126] duration metric: took 33.567641ms to wait for k8s-apps to be running ...
	I1227 20:16:16.623905  337106 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:16:16.623994  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:16:16.670311  337106 system_svc.go:56] duration metric: took 46.39668ms WaitForService to wait for kubelet
	I1227 20:16:16.670384  337106 kubeadm.go:587] duration metric: took 18.324769156s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:16:16.670417  337106 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:16:16.708894  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:16.708992  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:16.709018  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:16.709039  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:16.709068  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:16.709094  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:16.709113  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:16.709132  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:16.709151  337106 node_conditions.go:105] duration metric: took 38.715442ms to run NodePressure ...
	I1227 20:16:16.709184  337106 start.go:242] waiting for startup goroutines ...
	I1227 20:16:16.709228  337106 start.go:256] writing updated cluster config ...
	I1227 20:16:16.713916  337106 out.go:203] 
	I1227 20:16:16.723292  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:16.723425  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:16:16.727142  337106 out.go:179] * Starting "ha-422549-m03" control-plane node in "ha-422549" cluster
	I1227 20:16:16.732478  337106 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:16:16.735844  337106 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:16:16.739409  337106 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:16:16.739458  337106 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:16:16.739659  337106 cache.go:65] Caching tarball of preloaded images
	I1227 20:16:16.739753  337106 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:16:16.739768  337106 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:16:16.739908  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:16:16.767918  337106 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:16:16.767942  337106 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:16:16.767957  337106 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:16:16.767980  337106 start.go:360] acquireMachinesLock for ha-422549-m03: {Name:mkf062d56fcf026ae5cb73bd2d2d3016f0f6c481 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:16:16.768043  337106 start.go:364] duration metric: took 41.697µs to acquireMachinesLock for "ha-422549-m03"
	I1227 20:16:16.768068  337106 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:16:16.768074  337106 fix.go:54] fixHost starting: m03
	I1227 20:16:16.768352  337106 cli_runner.go:164] Run: docker container inspect ha-422549-m03 --format={{.State.Status}}
	I1227 20:16:16.790621  337106 fix.go:112] recreateIfNeeded on ha-422549-m03: state=Stopped err=<nil>
	W1227 20:16:16.790653  337106 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:16:16.794891  337106 out.go:252] * Restarting existing docker container for "ha-422549-m03" ...
	I1227 20:16:16.794974  337106 cli_runner.go:164] Run: docker start ha-422549-m03
	I1227 20:16:17.149956  337106 cli_runner.go:164] Run: docker container inspect ha-422549-m03 --format={{.State.Status}}
	I1227 20:16:17.174958  337106 kic.go:430] container "ha-422549-m03" state is running.
	I1227 20:16:17.175307  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m03
	I1227 20:16:17.213633  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:16:17.213863  337106 machine.go:94] provisionDockerMachine start ...
	I1227 20:16:17.213929  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:17.241742  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:17.242041  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1227 20:16:17.242056  337106 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:16:17.242635  337106 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 20:16:20.405227  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m03
	
	I1227 20:16:20.405265  337106 ubuntu.go:182] provisioning hostname "ha-422549-m03"
	I1227 20:16:20.405335  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:20.447382  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:20.447685  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1227 20:16:20.447702  337106 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-422549-m03 && echo "ha-422549-m03" | sudo tee /etc/hostname
	I1227 20:16:20.641581  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m03
	
	I1227 20:16:20.641669  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:20.671096  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:20.671417  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1227 20:16:20.671491  337106 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422549-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422549-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422549-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:16:20.825909  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:16:20.825934  337106 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:16:20.825963  337106 ubuntu.go:190] setting up certificates
	I1227 20:16:20.825973  337106 provision.go:84] configureAuth start
	I1227 20:16:20.826043  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m03
	I1227 20:16:20.848683  337106 provision.go:143] copyHostCerts
	I1227 20:16:20.848722  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:16:20.848751  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:16:20.848757  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:16:20.848829  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:16:20.848936  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:16:20.848954  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:16:20.848959  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:16:20.848987  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:16:20.849035  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:16:20.849051  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:16:20.849055  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:16:20.849079  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:16:20.849139  337106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.ha-422549-m03 san=[127.0.0.1 192.168.49.4 ha-422549-m03 localhost minikube]
	I1227 20:16:20.958713  337106 provision.go:177] copyRemoteCerts
	I1227 20:16:20.958777  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:16:20.958919  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:20.978456  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m03/id_rsa Username:docker}
	I1227 20:16:21.097778  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:16:21.097855  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:16:21.118223  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:16:21.118280  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 20:16:21.171526  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:16:21.171643  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:16:21.238272  337106 provision.go:87] duration metric: took 412.285774ms to configureAuth
	I1227 20:16:21.238317  337106 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:16:21.238586  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:21.238711  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:21.261112  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:21.261428  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1227 20:16:21.261479  337106 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:16:22.736503  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:16:22.736545  337106 machine.go:97] duration metric: took 5.522665605s to provisionDockerMachine
	I1227 20:16:22.736559  337106 start.go:293] postStartSetup for "ha-422549-m03" (driver="docker")
	I1227 20:16:22.736569  337106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:16:22.736631  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:16:22.736681  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:22.757560  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m03/id_rsa Username:docker}
	I1227 20:16:22.872943  337106 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:16:22.877107  337106 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:16:22.877150  337106 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:16:22.877162  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:16:22.877224  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:16:22.877310  337106 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:16:22.877323  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:16:22.877568  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:16:22.887508  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:16:22.935543  337106 start.go:296] duration metric: took 198.968452ms for postStartSetup
	I1227 20:16:22.935675  337106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:16:22.935751  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:22.962394  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m03/id_rsa Username:docker}
	I1227 20:16:23.086315  337106 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:16:23.098060  337106 fix.go:56] duration metric: took 6.329978316s for fixHost
	I1227 20:16:23.098095  337106 start.go:83] releasing machines lock for "ha-422549-m03", held for 6.330038441s
	I1227 20:16:23.098169  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m03
	I1227 20:16:23.127385  337106 out.go:179] * Found network options:
	I1227 20:16:23.130521  337106 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1227 20:16:23.133556  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:23.133603  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:23.133636  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:23.133648  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	I1227 20:16:23.133723  337106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:16:23.133754  337106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:16:23.133766  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:23.133843  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:23.174788  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m03/id_rsa Username:docker}
	I1227 20:16:23.176337  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m03/id_rsa Username:docker}
	I1227 20:16:23.532310  337106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:16:23.539423  337106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:16:23.539508  337106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:16:23.547781  337106 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:16:23.547805  337106 start.go:496] detecting cgroup driver to use...
	I1227 20:16:23.547836  337106 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:16:23.547889  337106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:16:23.564242  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:16:23.579653  337106 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:16:23.579767  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:16:23.598176  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:16:23.613182  337106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:16:23.877595  337106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:16:24.169571  337106 docker.go:234] disabling docker service ...
	I1227 20:16:24.169685  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:16:24.197205  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:16:24.211488  337106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:16:24.466324  337106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:16:24.716660  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:16:24.734029  337106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:16:24.758554  337106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:16:24.758647  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.777034  337106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:16:24.777106  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.791147  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.805710  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.818822  337106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:16:24.828018  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.843848  337106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.852557  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.865822  337106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:16:24.881844  337106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:16:24.890467  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:16:25.116336  337106 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:16:26.436202  337106 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.319834137s)
	I1227 20:16:26.436227  337106 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:16:26.436285  337106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:16:26.440409  337106 start.go:574] Will wait 60s for crictl version
	I1227 20:16:26.440474  337106 ssh_runner.go:195] Run: which crictl
	I1227 20:16:26.444800  337106 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:16:26.475048  337106 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:16:26.475137  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:16:26.509827  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:16:26.549254  337106 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:16:26.552189  337106 out.go:179]   - env NO_PROXY=192.168.49.2
	I1227 20:16:26.555166  337106 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1227 20:16:26.558176  337106 cli_runner.go:164] Run: docker network inspect ha-422549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:16:26.575734  337106 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 20:16:26.580184  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:16:26.590410  337106 mustload.go:66] Loading cluster: ha-422549
	I1227 20:16:26.590667  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:26.590918  337106 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:16:26.608326  337106 host.go:66] Checking if "ha-422549" exists ...
	I1227 20:16:26.608672  337106 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549 for IP: 192.168.49.4
	I1227 20:16:26.608684  337106 certs.go:195] generating shared ca certs ...
	I1227 20:16:26.608708  337106 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:16:26.608822  337106 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:16:26.608870  337106 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:16:26.608877  337106 certs.go:257] generating profile certs ...
	I1227 20:16:26.608966  337106 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key
	I1227 20:16:26.609032  337106 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.d8cf7377
	I1227 20:16:26.609078  337106 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key
	I1227 20:16:26.609087  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:16:26.609099  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:16:26.609109  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:16:26.609121  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:16:26.609131  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:16:26.609142  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:16:26.609153  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:16:26.609163  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:16:26.609238  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:16:26.609270  337106 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:16:26.609278  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:16:26.609540  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:16:26.609594  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:16:26.609622  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:16:26.609673  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:16:26.609705  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:26.609718  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:16:26.609729  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:16:26.609784  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:16:26.627281  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:16:26.717750  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1227 20:16:26.722194  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1227 20:16:26.732379  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1227 20:16:26.736107  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1227 20:16:26.744795  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1227 20:16:26.748608  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1227 20:16:26.757298  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1227 20:16:26.760963  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1227 20:16:26.770282  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1227 20:16:26.774405  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1227 20:16:26.782912  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1227 20:16:26.787280  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1227 20:16:26.796054  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:16:26.815746  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:16:26.833735  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:16:26.852956  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:16:26.873558  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:16:26.893781  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:16:26.912114  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:16:26.930067  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:16:26.954144  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:16:26.992095  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:16:27.032398  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:16:27.058957  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1227 20:16:27.082646  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1227 20:16:27.099055  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1227 20:16:27.114942  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1227 20:16:27.128524  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1227 20:16:27.143949  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1227 20:16:27.166895  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (728 bytes)
	I1227 20:16:27.189731  337106 ssh_runner.go:195] Run: openssl version
	I1227 20:16:27.199330  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:27.207176  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:16:27.215001  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:27.218816  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:27.218944  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:27.262656  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:16:27.270122  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:16:27.278066  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:16:27.286224  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:16:27.290216  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:16:27.290299  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:16:27.331583  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:16:27.339149  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:16:27.347443  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:16:27.354941  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:16:27.358541  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:16:27.358644  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:16:27.401369  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:16:27.408555  337106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:16:27.412327  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:16:27.452918  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:16:27.493668  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:16:27.534423  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:16:27.575645  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:16:27.617601  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
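The six openssl invocations above each run x509 -checkend 86400, i.e. they fail if the certificate expires within the next 24 hours. An equivalent check in Go is sketched below; the file path in main and the 24-hour window are illustrative assumptions (the log shows several certs under /var/lib/minikube/certs being checked this way).

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at path
// expires within the given window (mirroring `openssl x509 -checkend`).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Illustrative path; any of the certs checked in the log above would do.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}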
	I1227 20:16:27.658239  337106 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.35.0 crio true true} ...
	I1227 20:16:27.658389  337106 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422549-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:16:27.658424  337106 kube-vip.go:115] generating kube-vip config ...
	I1227 20:16:27.658480  337106 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 20:16:27.670482  337106 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:16:27.670542  337106 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1227 20:16:27.670611  337106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:16:27.678382  337106 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:16:27.678493  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1227 20:16:27.688057  337106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 20:16:27.702120  337106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:16:27.721182  337106 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 20:16:27.736629  337106 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 20:16:27.740129  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:16:27.750576  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:16:27.920085  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:16:27.936290  337106 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:16:27.936639  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:27.941595  337106 out.go:179] * Verifying Kubernetes components...
	I1227 20:16:27.944502  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:16:28.098929  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:16:28.115947  337106 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1227 20:16:28.116063  337106 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1227 20:16:28.116301  337106 node_ready.go:35] waiting up to 6m0s for node "ha-422549-m03" to be "Ready" ...
	W1227 20:16:30.121347  337106 node_ready.go:57] node "ha-422549-m03" has "Ready":"Unknown" status (will retry)
	W1227 20:16:32.620007  337106 node_ready.go:57] node "ha-422549-m03" has "Ready":"Unknown" status (will retry)
	W1227 20:16:34.620221  337106 node_ready.go:57] node "ha-422549-m03" has "Ready":"Unknown" status (will retry)
	W1227 20:16:36.620631  337106 node_ready.go:57] node "ha-422549-m03" has "Ready":"Unknown" status (will retry)
	W1227 20:16:38.620914  337106 node_ready.go:57] node "ha-422549-m03" has "Ready":"Unknown" status (will retry)
	W1227 20:16:41.119914  337106 node_ready.go:57] node "ha-422549-m03" has "Ready":"Unknown" status (will retry)
	I1227 20:16:42.138199  337106 node_ready.go:49] node "ha-422549-m03" is "Ready"
	I1227 20:16:42.138234  337106 node_ready.go:38] duration metric: took 14.021894093s for node "ha-422549-m03" to be "Ready" ...
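The wait above retries until node "ha-422549-m03" reports a Ready condition. A condensed sketch of the same check with client-go is shown below; the kubeconfig path and the 2-second polling interval are assumptions for illustration (the node name comes from this run).

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node currently has a Ready=True condition.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path is illustrative; minikube writes one per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	for {
		ready, err := nodeReady(ctx, cs, "ha-422549-m03")
		if err == nil && ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second) // roughly the retry cadence seen in the log above
	}
}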
	I1227 20:16:42.138250  337106 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:16:42.138320  337106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:16:42.201875  337106 api_server.go:72] duration metric: took 14.265538166s to wait for apiserver process to appear ...
	I1227 20:16:42.201905  337106 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:16:42.201928  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:42.211305  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1227 20:16:42.217811  337106 api_server.go:141] control plane version: v1.35.0
	I1227 20:16:42.217842  337106 api_server.go:131] duration metric: took 15.928834ms to wait for apiserver health ...
	I1227 20:16:42.217852  337106 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:16:42.235518  337106 system_pods.go:59] 26 kube-system pods found
	I1227 20:16:42.235637  337106 system_pods.go:61] "coredns-7d764666f9-mf5xw" [5a7f58c2-f991-46f0-9ece-9a561d53d25f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:42.235688  337106 system_pods.go:61] "coredns-7d764666f9-n5d9d" [159febfd-c1e4-4897-a372-59e4a3069914] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:42.235725  337106 system_pods.go:61] "etcd-ha-422549" [8f26f563-e734-4add-aefe-484f0e873a1e] Running
	I1227 20:16:42.235747  337106 system_pods.go:61] "etcd-ha-422549-m02" [5fed7e48-07c4-4a07-b63b-0fccbd196f6f] Running
	I1227 20:16:42.235772  337106 system_pods.go:61] "etcd-ha-422549-m03" [d22f78a1-2f4c-41e6-b65a-bf7108686c71] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:16:42.235810  337106 system_pods.go:61] "kindnet-28svl" [1494f795-941f-418e-8090-098225eb9c6a] Running
	I1227 20:16:42.235843  337106 system_pods.go:61] "kindnet-4hl7v" [ea2cc8a1-df16-440c-a093-a5d915b249b4] Running
	I1227 20:16:42.235869  337106 system_pods.go:61] "kindnet-5wczs" [df3d7298-4140-464f-a6e8-c614e1683488] Running
	I1227 20:16:42.235899  337106 system_pods.go:61] "kindnet-qkqmv" [66d834ae-af1b-456d-ae48-8a0d6608f961] Running
	I1227 20:16:42.235929  337106 system_pods.go:61] "kube-apiserver-ha-422549" [14f8e794-2ba7-477d-806b-03dd5a33d868] Running
	I1227 20:16:42.235961  337106 system_pods.go:61] "kube-apiserver-ha-422549-m02" [a4b97cc6-26ef-4d46-9ef9-bdee08eb89d6] Running
	I1227 20:16:42.235997  337106 system_pods.go:61] "kube-apiserver-ha-422549-m03" [71f23288-3e33-4bc8-9182-08c190ae026f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:16:42.236045  337106 system_pods.go:61] "kube-controller-manager-ha-422549" [b69af60f-4eac-4e85-aa81-66b7616a46f6] Running
	I1227 20:16:42.236083  337106 system_pods.go:61] "kube-controller-manager-ha-422549-m02" [07c0e68f-76e5-4cee-92a2-05dd2fb4c3e2] Running
	I1227 20:16:42.236112  337106 system_pods.go:61] "kube-controller-manager-ha-422549-m03" [af291694-2986-455c-8588-c2879d10ff3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:16:42.236140  337106 system_pods.go:61] "kube-proxy-cg4z5" [42f74e61-eb67-4d02-8f08-f77f7163f5fc] Running
	I1227 20:16:42.236179  337106 system_pods.go:61] "kube-proxy-kscg6" [baa716d5-546a-4922-ba51-fe1116e36c75] Running
	I1227 20:16:42.236206  337106 system_pods.go:61] "kube-proxy-mhmmn" [d69029af-1fc4-4a31-913e-92e1231e845a] Running
	I1227 20:16:42.236231  337106 system_pods.go:61] "kube-proxy-nqr7h" [d0fc3ef5-765a-4376-94e6-42237908d3fd] Running
	I1227 20:16:42.236262  337106 system_pods.go:61] "kube-scheduler-ha-422549" [549e105d-d2e7-42b6-ae48-098d590e7b1d] Running
	I1227 20:16:42.236297  337106 system_pods.go:61] "kube-scheduler-ha-422549-m02" [db9187da-87a8-4b73-baea-76f3d9ef35c7] Running
	I1227 20:16:42.236326  337106 system_pods.go:61] "kube-scheduler-ha-422549-m03" [2a6b70b3-5303-404f-8b1d-1a65b9b81555] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:16:42.236352  337106 system_pods.go:61] "kube-vip-ha-422549" [32d647ce-90ed-4f56-b4c8-7ed445019d88] Running
	I1227 20:16:42.236391  337106 system_pods.go:61] "kube-vip-ha-422549-m02" [ddde9374-24b7-498d-b829-6902c612b272] Running
	I1227 20:16:42.236414  337106 system_pods.go:61] "kube-vip-ha-422549-m03" [39a60c56-1bf0-4232-9af0-f55e0c66a33d] Running
	I1227 20:16:42.236441  337106 system_pods.go:61] "storage-provisioner" [0d645eab-223f-4dd6-9518-6ab4a21d4c09] Running
	I1227 20:16:42.236483  337106 system_pods.go:74] duration metric: took 18.617239ms to wait for pod list to return data ...
	I1227 20:16:42.236522  337106 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:16:42.247926  337106 default_sa.go:45] found service account: "default"
	I1227 20:16:42.248004  337106 default_sa.go:55] duration metric: took 11.459641ms for default service account to be created ...
	I1227 20:16:42.248030  337106 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:16:42.261989  337106 system_pods.go:86] 26 kube-system pods found
	I1227 20:16:42.262126  337106 system_pods.go:89] "coredns-7d764666f9-mf5xw" [5a7f58c2-f991-46f0-9ece-9a561d53d25f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:42.262177  337106 system_pods.go:89] "coredns-7d764666f9-n5d9d" [159febfd-c1e4-4897-a372-59e4a3069914] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:42.262207  337106 system_pods.go:89] "etcd-ha-422549" [8f26f563-e734-4add-aefe-484f0e873a1e] Running
	I1227 20:16:42.262236  337106 system_pods.go:89] "etcd-ha-422549-m02" [5fed7e48-07c4-4a07-b63b-0fccbd196f6f] Running
	I1227 20:16:42.262283  337106 system_pods.go:89] "etcd-ha-422549-m03" [d22f78a1-2f4c-41e6-b65a-bf7108686c71] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:16:42.262312  337106 system_pods.go:89] "kindnet-28svl" [1494f795-941f-418e-8090-098225eb9c6a] Running
	I1227 20:16:42.262338  337106 system_pods.go:89] "kindnet-4hl7v" [ea2cc8a1-df16-440c-a093-a5d915b249b4] Running
	I1227 20:16:42.262359  337106 system_pods.go:89] "kindnet-5wczs" [df3d7298-4140-464f-a6e8-c614e1683488] Running
	I1227 20:16:42.262394  337106 system_pods.go:89] "kindnet-qkqmv" [66d834ae-af1b-456d-ae48-8a0d6608f961] Running
	I1227 20:16:42.262426  337106 system_pods.go:89] "kube-apiserver-ha-422549" [14f8e794-2ba7-477d-806b-03dd5a33d868] Running
	I1227 20:16:42.262449  337106 system_pods.go:89] "kube-apiserver-ha-422549-m02" [a4b97cc6-26ef-4d46-9ef9-bdee08eb89d6] Running
	I1227 20:16:42.262479  337106 system_pods.go:89] "kube-apiserver-ha-422549-m03" [71f23288-3e33-4bc8-9182-08c190ae026f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:16:42.262522  337106 system_pods.go:89] "kube-controller-manager-ha-422549" [b69af60f-4eac-4e85-aa81-66b7616a46f6] Running
	I1227 20:16:42.262568  337106 system_pods.go:89] "kube-controller-manager-ha-422549-m02" [07c0e68f-76e5-4cee-92a2-05dd2fb4c3e2] Running
	I1227 20:16:42.262604  337106 system_pods.go:89] "kube-controller-manager-ha-422549-m03" [af291694-2986-455c-8588-c2879d10ff3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:16:42.262654  337106 system_pods.go:89] "kube-proxy-cg4z5" [42f74e61-eb67-4d02-8f08-f77f7163f5fc] Running
	I1227 20:16:42.262691  337106 system_pods.go:89] "kube-proxy-kscg6" [baa716d5-546a-4922-ba51-fe1116e36c75] Running
	I1227 20:16:42.262719  337106 system_pods.go:89] "kube-proxy-mhmmn" [d69029af-1fc4-4a31-913e-92e1231e845a] Running
	I1227 20:16:42.262764  337106 system_pods.go:89] "kube-proxy-nqr7h" [d0fc3ef5-765a-4376-94e6-42237908d3fd] Running
	I1227 20:16:42.262793  337106 system_pods.go:89] "kube-scheduler-ha-422549" [549e105d-d2e7-42b6-ae48-098d590e7b1d] Running
	I1227 20:16:42.262821  337106 system_pods.go:89] "kube-scheduler-ha-422549-m02" [db9187da-87a8-4b73-baea-76f3d9ef35c7] Running
	I1227 20:16:42.262867  337106 system_pods.go:89] "kube-scheduler-ha-422549-m03" [2a6b70b3-5303-404f-8b1d-1a65b9b81555] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:16:42.262896  337106 system_pods.go:89] "kube-vip-ha-422549" [32d647ce-90ed-4f56-b4c8-7ed445019d88] Running
	I1227 20:16:42.262923  337106 system_pods.go:89] "kube-vip-ha-422549-m02" [ddde9374-24b7-498d-b829-6902c612b272] Running
	I1227 20:16:42.262973  337106 system_pods.go:89] "kube-vip-ha-422549-m03" [39a60c56-1bf0-4232-9af0-f55e0c66a33d] Running
	I1227 20:16:42.263009  337106 system_pods.go:89] "storage-provisioner" [0d645eab-223f-4dd6-9518-6ab4a21d4c09] Running
	I1227 20:16:42.263038  337106 system_pods.go:126] duration metric: took 14.987495ms to wait for k8s-apps to be running ...
	I1227 20:16:42.263064  337106 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:16:42.263186  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:16:42.329952  337106 system_svc.go:56] duration metric: took 66.879518ms WaitForService to wait for kubelet
	I1227 20:16:42.330045  337106 kubeadm.go:587] duration metric: took 14.393713186s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:16:42.330082  337106 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:16:42.334874  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:42.334956  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:42.334985  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:42.335008  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:42.335041  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:42.335069  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:42.335090  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:42.335112  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:42.335144  337106 node_conditions.go:105] duration metric: took 5.018461ms to run NodePressure ...
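
The NodePressure verification above only reads back each node's reported capacity (203034800Ki of ephemeral storage and 2 CPUs on all four nodes). Below is a sketch of the same listing with client-go; the package and function names are illustrative, and the Clientset is assumed to be built as in the node-Ready sketch earlier.

// printNodeCapacity reports each node's ephemeral-storage and CPU capacity,
// the same values the NodePressure check above logs. Sketch only.
package checks

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func printNodeCapacity(client *kubernetes.Clientset) error {
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
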
	I1227 20:16:42.335178  337106 start.go:242] waiting for startup goroutines ...
	I1227 20:16:42.335217  337106 start.go:256] writing updated cluster config ...
	I1227 20:16:42.338858  337106 out.go:203] 
	I1227 20:16:42.342208  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:42.342412  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:16:42.346339  337106 out.go:179] * Starting "ha-422549-m04" worker node in "ha-422549" cluster
	I1227 20:16:42.350180  337106 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:16:42.353431  337106 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:16:42.356594  337106 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:16:42.356748  337106 cache.go:65] Caching tarball of preloaded images
	I1227 20:16:42.356702  337106 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:16:42.357174  337106 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:16:42.357212  337106 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:16:42.357376  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:16:42.393103  337106 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:16:42.393129  337106 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:16:42.393143  337106 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:16:42.393176  337106 start.go:360] acquireMachinesLock for ha-422549-m04: {Name:mk6b025464d8c3992b9046b379a06dcb477a1541 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:16:42.393245  337106 start.go:364] duration metric: took 45.324µs to acquireMachinesLock for "ha-422549-m04"
	I1227 20:16:42.393264  337106 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:16:42.393270  337106 fix.go:54] fixHost starting: m04
	I1227 20:16:42.393757  337106 cli_runner.go:164] Run: docker container inspect ha-422549-m04 --format={{.State.Status}}
	I1227 20:16:42.411553  337106 fix.go:112] recreateIfNeeded on ha-422549-m04: state=Stopped err=<nil>
	W1227 20:16:42.411578  337106 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:16:42.414835  337106 out.go:252] * Restarting existing docker container for "ha-422549-m04" ...
	I1227 20:16:42.414929  337106 cli_runner.go:164] Run: docker start ha-422549-m04
	I1227 20:16:42.767967  337106 cli_runner.go:164] Run: docker container inspect ha-422549-m04 --format={{.State.Status}}
	I1227 20:16:42.792044  337106 kic.go:430] container "ha-422549-m04" state is running.
	I1227 20:16:42.792404  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m04
	I1227 20:16:42.827351  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:16:42.827599  337106 machine.go:94] provisionDockerMachine start ...
	I1227 20:16:42.827669  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:42.865289  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:42.865636  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 20:16:42.865647  337106 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:16:42.866300  337106 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43686->127.0.0.1:33198: read: connection reset by peer
	I1227 20:16:46.033368  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m04
	
	I1227 20:16:46.033393  337106 ubuntu.go:182] provisioning hostname "ha-422549-m04"
	I1227 20:16:46.033521  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:46.061318  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:46.061712  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 20:16:46.061729  337106 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-422549-m04 && echo "ha-422549-m04" | sudo tee /etc/hostname
	I1227 20:16:46.247170  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m04
	
	I1227 20:16:46.247258  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:46.267833  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:46.268212  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 20:16:46.268238  337106 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422549-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422549-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422549-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:16:46.421793  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:16:46.421817  337106 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:16:46.421834  337106 ubuntu.go:190] setting up certificates
	I1227 20:16:46.421844  337106 provision.go:84] configureAuth start
	I1227 20:16:46.421907  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m04
	I1227 20:16:46.450717  337106 provision.go:143] copyHostCerts
	I1227 20:16:46.450775  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:16:46.450808  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:16:46.450827  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:16:46.450912  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:16:46.450998  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:16:46.451024  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:16:46.451029  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:16:46.451060  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:16:46.451106  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:16:46.451128  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:16:46.451133  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:16:46.451165  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:16:46.451217  337106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.ha-422549-m04 san=[127.0.0.1 192.168.49.5 ha-422549-m04 localhost minikube]
	I1227 20:16:46.849291  337106 provision.go:177] copyRemoteCerts
	I1227 20:16:46.849383  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:16:46.849466  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:46.871414  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m04/id_rsa Username:docker}
	I1227 20:16:46.969387  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:16:46.969501  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:16:46.998452  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:16:46.998518  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 20:16:47.021097  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:16:47.021160  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 20:16:47.040293  337106 provision.go:87] duration metric: took 618.436373ms to configureAuth
	I1227 20:16:47.040318  337106 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:16:47.040553  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:47.040650  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:47.060413  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:47.060713  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 20:16:47.060726  337106 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:16:47.416575  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:16:47.416595  337106 machine.go:97] duration metric: took 4.588981536s to provisionDockerMachine
	I1227 20:16:47.416607  337106 start.go:293] postStartSetup for "ha-422549-m04" (driver="docker")
	I1227 20:16:47.416618  337106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:16:47.416709  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:16:47.416753  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:47.436074  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m04/id_rsa Username:docker}
	I1227 20:16:47.541369  337106 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:16:47.545584  337106 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:16:47.545615  337106 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:16:47.545627  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:16:47.545689  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:16:47.545788  337106 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:16:47.545802  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:16:47.545901  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:16:47.553680  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:16:47.574171  337106 start.go:296] duration metric: took 157.548886ms for postStartSetup
	I1227 20:16:47.574295  337106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:16:47.574343  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:47.591734  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m04/id_rsa Username:docker}
	I1227 20:16:47.691874  337106 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:16:47.696839  337106 fix.go:56] duration metric: took 5.303562652s for fixHost
	I1227 20:16:47.696874  337106 start.go:83] releasing machines lock for "ha-422549-m04", held for 5.303620217s
	I1227 20:16:47.696941  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m04
	I1227 20:16:47.722974  337106 out.go:179] * Found network options:
	I1227 20:16:47.725907  337106 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1227 20:16:47.728701  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:47.728735  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:47.728747  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:47.728789  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:47.728805  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:47.728815  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	I1227 20:16:47.728903  337106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:16:47.728946  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:47.729221  337106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:16:47.729281  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:47.750771  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m04/id_rsa Username:docker}
	I1227 20:16:47.772821  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m04/id_rsa Username:docker}
	I1227 20:16:47.915331  337106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:16:47.990713  337106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:16:47.990795  337106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:16:48.000448  337106 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:16:48.000481  337106 start.go:496] detecting cgroup driver to use...
	I1227 20:16:48.000514  337106 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:16:48.000573  337106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:16:48.021384  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:16:48.039922  337106 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:16:48.040026  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:16:48.062813  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:16:48.079604  337106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:16:48.252416  337106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:16:48.379968  337106 docker.go:234] disabling docker service ...
	I1227 20:16:48.380079  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:16:48.396866  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:16:48.412804  337106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:16:48.580976  337106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:16:48.708477  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:16:48.723957  337106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:16:48.740271  337106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:16:48.740353  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.751954  337106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:16:48.752031  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.770376  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.788562  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.800161  337106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:16:48.809833  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.820365  337106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.838111  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.851461  337106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:16:48.859082  337106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:16:48.867125  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:16:49.040301  337106 ssh_runner.go:195] Run: sudo systemctl restart crio
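
The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted. Reconstructed from those commands (not captured from the node, and with section headers shown only for orientation), the drop-in should end up containing roughly:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
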
	I1227 20:16:49.267978  337106 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:16:49.268078  337106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:16:49.275575  337106 start.go:574] Will wait 60s for crictl version
	I1227 20:16:49.275679  337106 ssh_runner.go:195] Run: which crictl
	I1227 20:16:49.281419  337106 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:16:49.315494  337106 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:16:49.315644  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:16:49.369281  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:16:49.404637  337106 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:16:49.407552  337106 out.go:179]   - env NO_PROXY=192.168.49.2
	I1227 20:16:49.411293  337106 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1227 20:16:49.414211  337106 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1227 20:16:49.417170  337106 cli_runner.go:164] Run: docker network inspect ha-422549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:16:49.439158  337106 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 20:16:49.443392  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:16:49.460241  337106 mustload.go:66] Loading cluster: ha-422549
	I1227 20:16:49.460498  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:49.460747  337106 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:16:49.491043  337106 host.go:66] Checking if "ha-422549" exists ...
	I1227 20:16:49.491329  337106 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549 for IP: 192.168.49.5
	I1227 20:16:49.491337  337106 certs.go:195] generating shared ca certs ...
	I1227 20:16:49.491350  337106 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:16:49.491459  337106 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:16:49.491497  337106 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:16:49.491508  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:16:49.491519  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:16:49.491530  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:16:49.491540  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:16:49.491593  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:16:49.491624  337106 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:16:49.491632  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:16:49.491659  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:16:49.491683  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:16:49.491705  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:16:49.491748  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:16:49.491776  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:16:49.491789  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:49.491812  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:16:49.491829  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:16:49.515784  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:16:49.544429  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:16:49.565837  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:16:49.591774  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:16:49.613222  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:16:49.642392  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:16:49.671654  337106 ssh_runner.go:195] Run: openssl version
	I1227 20:16:49.680550  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:16:49.689578  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:16:49.699039  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:16:49.704553  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:16:49.704616  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:16:49.749850  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:16:49.758256  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:49.766307  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:16:49.776970  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:49.780927  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:49.781029  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:49.822773  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:16:49.830459  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:16:49.838202  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:16:49.847286  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:16:49.851257  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:16:49.851323  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:16:49.895472  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:16:49.903822  337106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:16:49.907501  337106 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 20:16:49.907548  337106 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.35.0 crio false true} ...
	I1227 20:16:49.907686  337106 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422549-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:16:49.907776  337106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:16:49.915527  337106 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:16:49.915638  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1227 20:16:49.923067  337106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 20:16:49.936470  337106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:16:49.951403  337106 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 20:16:49.955422  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:16:49.965541  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:16:50.111024  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:16:50.130778  337106 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1227 20:16:50.131217  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:50.136553  337106 out.go:179] * Verifying Kubernetes components...
	I1227 20:16:50.139597  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:16:50.312113  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:16:50.327943  337106 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1227 20:16:50.328030  337106 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1227 20:16:50.328306  337106 node_ready.go:35] waiting up to 6m0s for node "ha-422549-m04" to be "Ready" ...
	I1227 20:16:51.834080  337106 node_ready.go:49] node "ha-422549-m04" is "Ready"
	I1227 20:16:51.834112  337106 node_ready.go:38] duration metric: took 1.505787179s for node "ha-422549-m04" to be "Ready" ...
	I1227 20:16:51.834136  337106 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:16:51.834194  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:16:51.847783  337106 system_svc.go:56] duration metric: took 13.639755ms WaitForService to wait for kubelet
	I1227 20:16:51.847815  337106 kubeadm.go:587] duration metric: took 1.71699582s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:16:51.847835  337106 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:16:51.851110  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:51.851141  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:51.851154  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:51.851159  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:51.851164  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:51.851171  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:51.851174  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:51.851178  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:51.851184  337106 node_conditions.go:105] duration metric: took 3.342441ms to run NodePressure ...
	I1227 20:16:51.851198  337106 start.go:242] waiting for startup goroutines ...
	I1227 20:16:51.851223  337106 start.go:256] writing updated cluster config ...
	I1227 20:16:51.851550  337106 ssh_runner.go:195] Run: rm -f paused
	I1227 20:16:51.855763  337106 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:16:51.856293  337106 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
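
The pod_ready phase below selects kube-system pods by the component/k8s-app labels listed above and waits for each to report Ready (or to disappear). A compact sketch of one such check with client-go follows; the package and function names are illustrative, and the real pod_ready.go handles more cases than this.

// podReady reports whether a named kube-system pod has Ready=True,
// mirroring the per-pod checks below. Sketch only.
package checks

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func podReady(client *kubernetes.Clientset, name string) (bool, error) {
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
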
	I1227 20:16:51.875834  337106 pod_ready.go:83] waiting for pod "coredns-7d764666f9-mf5xw" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 20:16:53.883849  337106 pod_ready.go:104] pod "coredns-7d764666f9-mf5xw" is not "Ready", error: <nil>
	W1227 20:16:56.461572  337106 pod_ready.go:104] pod "coredns-7d764666f9-mf5xw" is not "Ready", error: <nil>
	I1227 20:16:56.881855  337106 pod_ready.go:94] pod "coredns-7d764666f9-mf5xw" is "Ready"
	I1227 20:16:56.881886  337106 pod_ready.go:86] duration metric: took 5.006014091s for pod "coredns-7d764666f9-mf5xw" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.881896  337106 pod_ready.go:83] waiting for pod "coredns-7d764666f9-n5d9d" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.887788  337106 pod_ready.go:94] pod "coredns-7d764666f9-n5d9d" is "Ready"
	I1227 20:16:56.887818  337106 pod_ready.go:86] duration metric: took 5.91483ms for pod "coredns-7d764666f9-n5d9d" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.891258  337106 pod_ready.go:83] waiting for pod "etcd-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.898397  337106 pod_ready.go:94] pod "etcd-ha-422549" is "Ready"
	I1227 20:16:56.898437  337106 pod_ready.go:86] duration metric: took 7.137144ms for pod "etcd-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.898449  337106 pod_ready.go:83] waiting for pod "etcd-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.906314  337106 pod_ready.go:94] pod "etcd-ha-422549-m02" is "Ready"
	I1227 20:16:56.906341  337106 pod_ready.go:86] duration metric: took 7.885849ms for pod "etcd-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.906352  337106 pod_ready.go:83] waiting for pod "etcd-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:57.076308  337106 request.go:683] "Waited before sending request" delay="167.221744ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m03"
	I1227 20:16:57.080536  337106 pod_ready.go:94] pod "etcd-ha-422549-m03" is "Ready"
	I1227 20:16:57.080564  337106 pod_ready.go:86] duration metric: took 174.205244ms for pod "etcd-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:57.276888  337106 request.go:683] "Waited before sending request" delay="196.187905ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1227 20:16:57.280390  337106 pod_ready.go:83] waiting for pod "kube-apiserver-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:57.476826  337106 request.go:683] "Waited before sending request" delay="196.340204ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-422549"
	I1227 20:16:57.677055  337106 request.go:683] "Waited before sending request" delay="195.372363ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549"
	I1227 20:16:57.680148  337106 pod_ready.go:94] pod "kube-apiserver-ha-422549" is "Ready"
	I1227 20:16:57.680173  337106 pod_ready.go:86] duration metric: took 399.753981ms for pod "kube-apiserver-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:57.680183  337106 pod_ready.go:83] waiting for pod "kube-apiserver-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:57.876636  337106 request.go:683] "Waited before sending request" delay="196.366115ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-422549-m02"
	I1227 20:16:58.076883  337106 request.go:683] "Waited before sending request" delay="195.240889ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m02"
	I1227 20:16:58.081595  337106 pod_ready.go:94] pod "kube-apiserver-ha-422549-m02" is "Ready"
	I1227 20:16:58.081624  337106 pod_ready.go:86] duration metric: took 401.434113ms for pod "kube-apiserver-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:58.081636  337106 pod_ready.go:83] waiting for pod "kube-apiserver-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:58.277078  337106 request.go:683] "Waited before sending request" delay="195.329053ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-422549-m03"
	I1227 20:16:58.476156  337106 request.go:683] "Waited before sending request" delay="193.265737ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m03"
	I1227 20:16:58.479583  337106 pod_ready.go:94] pod "kube-apiserver-ha-422549-m03" is "Ready"
	I1227 20:16:58.479609  337106 pod_ready.go:86] duration metric: took 397.939042ms for pod "kube-apiserver-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:58.677038  337106 request.go:683] "Waited before sending request" delay="197.311256ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1227 20:16:58.680893  337106 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:58.876237  337106 request.go:683] "Waited before sending request" delay="195.249704ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-422549"
	I1227 20:16:59.076160  337106 request.go:683] "Waited before sending request" delay="194.26927ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549"
	I1227 20:16:59.079502  337106 pod_ready.go:94] pod "kube-controller-manager-ha-422549" is "Ready"
	I1227 20:16:59.079531  337106 pod_ready.go:86] duration metric: took 398.612222ms for pod "kube-controller-manager-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:59.079542  337106 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:59.276926  337106 request.go:683] "Waited before sending request" delay="197.310947ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-422549-m02"
	I1227 20:16:59.476987  337106 request.go:683] "Waited before sending request" delay="195.346795ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m02"
	I1227 20:16:59.480256  337106 pod_ready.go:94] pod "kube-controller-manager-ha-422549-m02" is "Ready"
	I1227 20:16:59.480288  337106 pod_ready.go:86] duration metric: took 400.738794ms for pod "kube-controller-manager-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:59.480298  337106 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:59.676709  337106 request.go:683] "Waited before sending request" delay="196.313782ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-422549-m03"
	I1227 20:16:59.876936  337106 request.go:683] "Waited before sending request" delay="194.422474ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m03"
	I1227 20:16:59.880871  337106 pod_ready.go:94] pod "kube-controller-manager-ha-422549-m03" is "Ready"
	I1227 20:16:59.880898  337106 pod_ready.go:86] duration metric: took 400.592723ms for pod "kube-controller-manager-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:00.077121  337106 request.go:683] "Waited before sending request" delay="196.103919ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1227 20:17:00.089664  337106 pod_ready.go:83] waiting for pod "kube-proxy-cg4z5" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:00.277067  337106 request.go:683] "Waited before sending request" delay="187.22976ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cg4z5"
	I1227 20:17:00.476439  337106 request.go:683] "Waited before sending request" delay="191.18971ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m03"
	I1227 20:17:00.480835  337106 pod_ready.go:94] pod "kube-proxy-cg4z5" is "Ready"
	I1227 20:17:00.480892  337106 pod_ready.go:86] duration metric: took 391.133363ms for pod "kube-proxy-cg4z5" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:00.480907  337106 pod_ready.go:83] waiting for pod "kube-proxy-kscg6" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:00.676146  337106 request.go:683] "Waited before sending request" delay="195.116873ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kscg6"
	I1227 20:17:00.876152  337106 request.go:683] "Waited before sending request" delay="192.262917ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m04"
	I1227 20:17:00.881008  337106 pod_ready.go:94] pod "kube-proxy-kscg6" is "Ready"
	I1227 20:17:00.881038  337106 pod_ready.go:86] duration metric: took 400.122065ms for pod "kube-proxy-kscg6" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:00.881048  337106 pod_ready.go:83] waiting for pod "kube-proxy-mhmmn" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:01.076325  337106 request.go:683] "Waited before sending request" delay="195.195166ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mhmmn"
	I1227 20:17:01.276909  337106 request.go:683] "Waited before sending request" delay="195.293101ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549"
	I1227 20:17:01.280680  337106 pod_ready.go:94] pod "kube-proxy-mhmmn" is "Ready"
	I1227 20:17:01.280710  337106 pod_ready.go:86] duration metric: took 399.654071ms for pod "kube-proxy-mhmmn" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:01.280722  337106 pod_ready.go:83] waiting for pod "kube-proxy-nqr7h" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:01.476964  337106 request.go:683] "Waited before sending request" delay="196.12986ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqr7h"
	I1227 20:17:01.676540  337106 request.go:683] "Waited before sending request" delay="192.49818ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m02"
	I1227 20:17:01.685668  337106 pod_ready.go:94] pod "kube-proxy-nqr7h" is "Ready"
	I1227 20:17:01.685702  337106 pod_ready.go:86] duration metric: took 404.972449ms for pod "kube-proxy-nqr7h" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:01.876169  337106 request.go:683] "Waited before sending request" delay="190.319322ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1227 20:17:01.882184  337106 pod_ready.go:83] waiting for pod "kube-scheduler-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:02.076778  337106 request.go:683] "Waited before sending request" delay="194.39653ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-422549"
	I1227 20:17:02.277097  337106 request.go:683] "Waited before sending request" delay="189.264505ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549"
	I1227 20:17:02.281682  337106 pod_ready.go:94] pod "kube-scheduler-ha-422549" is "Ready"
	I1227 20:17:02.281718  337106 pod_ready.go:86] duration metric: took 399.422109ms for pod "kube-scheduler-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:02.281728  337106 pod_ready.go:83] waiting for pod "kube-scheduler-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:02.477021  337106 request.go:683] "Waited before sending request" delay="195.180295ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-422549-m02"
	I1227 20:17:02.676336  337106 request.go:683] "Waited before sending request" delay="193.224619ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m02"
	I1227 20:17:02.680037  337106 pod_ready.go:94] pod "kube-scheduler-ha-422549-m02" is "Ready"
	I1227 20:17:02.680112  337106 pod_ready.go:86] duration metric: took 398.375125ms for pod "kube-scheduler-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:02.680126  337106 pod_ready.go:83] waiting for pod "kube-scheduler-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:02.876405  337106 request.go:683] "Waited before sending request" delay="196.195019ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-422549-m03"
	I1227 20:17:03.076174  337106 request.go:683] "Waited before sending request" delay="195.233596ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m03"
	I1227 20:17:03.079768  337106 pod_ready.go:94] pod "kube-scheduler-ha-422549-m03" is "Ready"
	I1227 20:17:03.079800  337106 pod_ready.go:86] duration metric: took 399.666897ms for pod "kube-scheduler-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:03.079847  337106 pod_ready.go:40] duration metric: took 11.224018864s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:17:03.152145  337106 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 20:17:03.155161  337106 out.go:203] 
	W1227 20:17:03.158240  337106 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 20:17:03.161317  337106 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 20:17:03.164544  337106 out.go:179] * Done! kubectl is now configured to use "ha-422549" cluster and "default" namespace by default
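	The skew warning above is advisory, but it can be sidestepped by using the kubectl that minikube bundles for the cluster's Kubernetes version. A minimal, hedged example (the profile name is taken from this run; the -p profile flag and the "kubectl --" passthrough are standard minikube usage rather than something shown elsewhere in this log):
	
	  out/minikube-linux-arm64 -p ha-422549 kubectl -- get pods -A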
	
	
	==> CRI-O <==
	Dec 27 20:16:14 ha-422549 crio[669]: time="2025-12-27T20:16:14.963662144Z" level=info msg="Started container" PID=1165 containerID=e30e2fc201d45a408198fe1cf19728fccd5ebe17d0f5255f7589564c690889ec description=kube-system/kube-proxy-mhmmn/kube-proxy id=83f9017b-13c2-4c2b-927f-e22b6986096d name=/runtime.v1.RuntimeService/StartContainer sandboxID=6495c9a31e01c2f5ac17768f9f5e13a5423c5594fc2867804e3bb0a908221252
	Dec 27 20:16:45 ha-422549 conmon[1143]: conmon 7acd50dc5298fb99db44 <ninfo>: container 1152 exited with status 1
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.428315945Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f60cbd10-f7b2-4cd1-80a7-fccba0550911 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.43511179Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=994ad400-2597-4615-b648-cdef116922a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.438853907Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=52ecd850-72c2-4d8c-abb4-bcb68b155882 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.438953761Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.446454815Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.447683161Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/be0e461cabdcf17f5b8d1bb2222c3a204fd930be36abbb0859da36ab3d16462f/merged/etc/passwd: no such file or directory"
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.447776861Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/be0e461cabdcf17f5b8d1bb2222c3a204fd930be36abbb0859da36ab3d16462f/merged/etc/group: no such file or directory"
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.448117445Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.466884564Z" level=info msg="Created container 7361d14a41eae128627f7ec4143721dd6bb4d3ae719e332d08bda13887aca146: kube-system/storage-provisioner/storage-provisioner" id=52ecd850-72c2-4d8c-abb4-bcb68b155882 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.472967068Z" level=info msg="Starting container: 7361d14a41eae128627f7ec4143721dd6bb4d3ae719e332d08bda13887aca146" id=34f02e50-7595-4b71-82ea-dc48fe422b8c name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.475650188Z" level=info msg="Started container" PID=1422 containerID=7361d14a41eae128627f7ec4143721dd6bb4d3ae719e332d08bda13887aca146 description=kube-system/storage-provisioner/storage-provisioner id=34f02e50-7595-4b71-82ea-dc48fe422b8c name=/runtime.v1.RuntimeService/StartContainer sandboxID=735879ad1c236176f8b5399b57a79b6c0ab6195af5a05ee38eac2aa69480249f
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.268998026Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.274112141Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.274149957Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.274171495Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.277419129Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.277535811Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.27759697Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.281296488Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.281332581Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.281356277Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.285112877Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.28514943Z" level=info msg="Updated default CNI network name to kindnet"
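	The CNI monitoring events above show CRI-O re-reading the kindnet config as it is created, written, and renamed under /etc/cni/net.d. To confirm which config CRI-O settled on, a hedged sketch (the "ssh --" passthrough and the use of sudo are assumptions about this environment, not taken from this log):
	
	  out/minikube-linux-arm64 -p ha-422549 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist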
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	7361d14a41eae       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   20 seconds ago       Running             storage-provisioner       4                   735879ad1c236       storage-provisioner                 kube-system
	7879d1a6c6a98       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf   50 seconds ago       Running             coredns                   2                   bd06f2852a595       coredns-7d764666f9-mf5xw            kube-system
	0fb071b8bd6b6       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   51 seconds ago       Running             busybox                   2                   cf93f418a9a0a       busybox-769dd8b7dd-k7ks6            default
	7acd50dc5298f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   51 seconds ago       Exited              storage-provisioner       3                   735879ad1c236       storage-provisioner                 kube-system
	e30e2fc201d45       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   51 seconds ago       Running             kube-proxy                2                   6495c9a31e01c       kube-proxy-mhmmn                    kube-system
	595cf90732ea1       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf   51 seconds ago       Running             coredns                   2                   6e45d9e1ac155       coredns-7d764666f9-n5d9d            kube-system
	f4b4244b1db16       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13   51 seconds ago       Running             kindnet-cni               2                   828118b404202       kindnet-qkqmv                       kube-system
	8a1b0b47a0ed1       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   51 seconds ago       Running             kube-controller-manager   7                   75a2af3dd93e9       kube-controller-manager-ha-422549   kube-system
	acdd287d4087f       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   About a minute ago   Running             kube-scheduler            2                   ee19621eddf01       kube-scheduler-ha-422549            kube-system
	7c4ac1dbe59ad       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   About a minute ago   Exited              kube-controller-manager   6                   75a2af3dd93e9       kube-controller-manager-ha-422549   kube-system
	6b0b91d1da0a4       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   About a minute ago   Running             kube-apiserver            3                   025c49d6ec070       kube-apiserver-ha-422549            kube-system
	776b31832bd3b       28c5662932f6032ee4faba083d9c2af90232797e1d4f89d9892cb92b26fec299   About a minute ago   Running             kube-vip                  1                   66af5fba1f89e       kube-vip-ha-422549                  kube-system
	97ce57129ce3b       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   About a minute ago   Running             etcd                      2                   77b191af13e7e       etcd-ha-422549                      kube-system
	
	
	==> coredns [595cf90732ea108872ec4fb5764679f01619c8baa8a4aca8307dd9cb64a9120f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:35202 - 54427 "HINFO IN 8582221969168170305.1983723465531701443. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.038347152s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	
	
	==> coredns [7879d1a6c6a98b3b227de2b37ae12cd1a3492d804d3ec108fe982379de5ffd0c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:46822 - 1915 "HINFO IN 1020865313171851806.989409873494633985. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013088569s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-422549
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_03_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:03:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:17:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:16:35 +0000   Sat, 27 Dec 2025 20:03:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:16:35 +0000   Sat, 27 Dec 2025 20:03:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:16:35 +0000   Sat, 27 Dec 2025 20:03:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:16:35 +0000   Sat, 27 Dec 2025 20:09:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-422549
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                acd356f3-8732-454f-9ea5-4ebb90b80a04
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-769dd8b7dd-k7ks6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7d764666f9-mf5xw             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 coredns-7d764666f9-n5d9d             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-ha-422549                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-qkqmv                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-422549             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-422549    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-mhmmn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-422549             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-422549                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  13m    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  13m    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  12m    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  10m    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  7m17s  node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  49s    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  48s    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  23s    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	
	
	Name:               ha-422549-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_27T20_04_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:04:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:17:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:17:04 +0000   Sat, 27 Dec 2025 20:16:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:17:04 +0000   Sat, 27 Dec 2025 20:16:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:17:04 +0000   Sat, 27 Dec 2025 20:16:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:17:04 +0000   Sat, 27 Dec 2025 20:16:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-422549-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                279e934d-6d34-4a11-83f0-a7f36011d6a2
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-769dd8b7dd-v6vks                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-422549-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-5wczs                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-422549-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-422549-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-nqr7h                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-422549-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-422549-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  13m    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  13m    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  12m    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  10m    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  7m17s  node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  NodeNotReady    6m27s  node-controller  Node ha-422549-m02 status is now: NodeNotReady
	  Normal  RegisteredNode  49s    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  48s    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  23s    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	
	
	Name:               ha-422549-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_27T20_04_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:04:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:17:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:16:41 +0000   Sat, 27 Dec 2025 20:16:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:16:41 +0000   Sat, 27 Dec 2025 20:16:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:16:41 +0000   Sat, 27 Dec 2025 20:16:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:16:41 +0000   Sat, 27 Dec 2025 20:16:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-422549-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                dd826b6d-21ec-45c4-b392-2d4b9b2daddb
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-769dd8b7dd-qcz4b                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-422549-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-28svl                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-422549-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-422549-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-cg4z5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-422549-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-422549-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  12m    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  12m    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  12m    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  10m    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  7m17s  node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  NodeNotReady    6m27s  node-controller  Node ha-422549-m03 status is now: NodeNotReady
	  Normal  RegisteredNode  49s    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  48s    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  23s    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	
	
	Name:               ha-422549-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_27T20_05_33_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:05:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:17:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:16:51 +0000   Sat, 27 Dec 2025 20:16:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:16:51 +0000   Sat, 27 Dec 2025 20:16:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:16:51 +0000   Sat, 27 Dec 2025 20:16:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:16:51 +0000   Sat, 27 Dec 2025 20:16:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-422549-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                45c0e480-898e-46d5-83ce-c457d7b4b021
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4hl7v       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-proxy-kscg6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  10m    node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  7m17s  node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  NodeNotReady    6m27s  node-controller  Node ha-422549-m04 status is now: NodeNotReady
	  Normal  RegisteredNode  49s    node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  48s    node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  23s    node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	
	
	==> dmesg <==
	[Dec27 19:27] overlayfs: idmapped layers are currently not supported
	[Dec27 19:28] overlayfs: idmapped layers are currently not supported
	[ +28.388596] overlayfs: idmapped layers are currently not supported
	[Dec27 19:29] overlayfs: idmapped layers are currently not supported
	[  +9.242530] overlayfs: idmapped layers are currently not supported
	[Dec27 19:30] overlayfs: idmapped layers are currently not supported
	[ +11.577339] overlayfs: idmapped layers are currently not supported
	[Dec27 19:32] overlayfs: idmapped layers are currently not supported
	[ +19.186532] overlayfs: idmapped layers are currently not supported
	[Dec27 19:34] overlayfs: idmapped layers are currently not supported
	[Dec27 19:54] kauditd_printk_skb: 8 callbacks suppressed
	[Dec27 19:56] overlayfs: idmapped layers are currently not supported
	[Dec27 19:59] overlayfs: idmapped layers are currently not supported
	[Dec27 20:00] overlayfs: idmapped layers are currently not supported
	[Dec27 20:03] overlayfs: idmapped layers are currently not supported
	[ +31.019083] overlayfs: idmapped layers are currently not supported
	[Dec27 20:04] overlayfs: idmapped layers are currently not supported
	[Dec27 20:05] overlayfs: idmapped layers are currently not supported
	[Dec27 20:06] overlayfs: idmapped layers are currently not supported
	[Dec27 20:07] overlayfs: idmapped layers are currently not supported
	[  +3.687478] overlayfs: idmapped layers are currently not supported
	[Dec27 20:15] overlayfs: idmapped layers are currently not supported
	[  +3.163851] overlayfs: idmapped layers are currently not supported
	[Dec27 20:16] overlayfs: idmapped layers are currently not supported
	[ +35.129102] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [97ce57129ce3bc803fd62d49e1f3f06d06aa64d93e2ef36f372084cbbd21e34a] <==
	{"level":"warn","ts":"2025-12-27T20:16:25.166605Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","error":"EOF"}
	{"level":"warn","ts":"2025-12-27T20:16:25.198331Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"1cbc45fdb1f38dc","error":"failed to dial 1cbc45fdb1f38dc on stream Message (EOF)"}
	{"level":"warn","ts":"2025-12-27T20:16:25.227922Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1cbc45fdb1f38dc","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T20:16:25.227903Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1cbc45fdb1f38dc","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T20:16:25.343755Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc"}
	{"level":"warn","ts":"2025-12-27T20:16:25.769831Z","caller":"etcdserver/cluster_util.go:261","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"1cbc45fdb1f38dc","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T20:16:25.769943Z","caller":"etcdserver/cluster_util.go:162","msg":"failed to get version","remote-member-id":"1cbc45fdb1f38dc","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T20:16:28.551578Z","caller":"rafthttp/stream.go:193","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc"}
	{"level":"warn","ts":"2025-12-27T20:16:29.772372Z","caller":"etcdserver/cluster_util.go:261","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"1cbc45fdb1f38dc","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T20:16:29.772423Z","caller":"etcdserver/cluster_util.go:162","msg":"failed to get version","remote-member-id":"1cbc45fdb1f38dc","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T20:16:30.231891Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1cbc45fdb1f38dc","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T20:16:30.231953Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1cbc45fdb1f38dc","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T20:16:33.773388Z","caller":"etcdserver/cluster_util.go:261","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"1cbc45fdb1f38dc","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T20:16:33.773438Z","caller":"etcdserver/cluster_util.go:162","msg":"failed to get version","remote-member-id":"1cbc45fdb1f38dc","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T20:16:35.232069Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1cbc45fdb1f38dc","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T20:16:35.232083Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1cbc45fdb1f38dc","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2025-12-27T20:16:37.257347Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"1cbc45fdb1f38dc","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-12-27T20:16:37.257386Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"1cbc45fdb1f38dc"}
	{"level":"info","ts":"2025-12-27T20:16:37.257399Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc"}
	{"level":"info","ts":"2025-12-27T20:16:37.267581Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"1cbc45fdb1f38dc","stream-type":"stream Message"}
	{"level":"info","ts":"2025-12-27T20:16:37.267620Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc"}
	{"level":"info","ts":"2025-12-27T20:16:37.295096Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc"}
	{"level":"info","ts":"2025-12-27T20:16:37.295396Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc"}
	{"level":"warn","ts":"2025-12-27T20:17:06.414126Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"199.990934ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" limit:500 ","response":"range_response_count:500 size:371820"}
	{"level":"info","ts":"2025-12-27T20:17:06.414197Z","caller":"traceutil/trace.go:172","msg":"trace[128322948] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:500; response_revision:2982; }","duration":"200.078275ms","start":"2025-12-27T20:17:06.214106Z","end":"2025-12-27T20:17:06.414184Z","steps":["trace[128322948] 'range keys from bolt db'  (duration: 198.878572ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:17:06 up  1:59,  0 user,  load average: 1.60, 1.22, 1.36
	Linux ha-422549 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f4b4244b1db16ca451154424e89d4d56ce2b826c6f69b1c1fa82f892e7966881] <==
	E1227 20:16:45.273950       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1227 20:16:45.285766       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1227 20:16:45.285845       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1227 20:16:46.769029       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:16:46.769154       1 metrics.go:72] Registering metrics
	I1227 20:16:46.769261       1 controller.go:711] "Syncing nftables rules"
	I1227 20:16:55.268126       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1227 20:16:55.268228       1 main.go:324] Node ha-422549-m03 has CIDR [10.244.2.0/24] 
	I1227 20:16:55.268426       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.49.4 Flags: [] Table: 0 Realm: 0} 
	I1227 20:16:55.268521       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1227 20:16:55.268535       1 main.go:324] Node ha-422549-m04 has CIDR [10.244.3.0/24] 
	I1227 20:16:55.268588       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0 Realm: 0} 
	I1227 20:16:55.268639       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 20:16:55.268652       1 main.go:301] handling current node
	I1227 20:16:55.274378       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1227 20:16:55.277916       1 main.go:324] Node ha-422549-m02 has CIDR [10.244.1.0/24] 
	I1227 20:16:55.278084       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.49.3 Flags: [] Table: 0 Realm: 0} 
	I1227 20:17:05.268989       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 20:17:05.269024       1 main.go:301] handling current node
	I1227 20:17:05.269041       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1227 20:17:05.269047       1 main.go:324] Node ha-422549-m02 has CIDR [10.244.1.0/24] 
	I1227 20:17:05.269196       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1227 20:17:05.269272       1 main.go:324] Node ha-422549-m03 has CIDR [10.244.2.0/24] 
	I1227 20:17:05.269415       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1227 20:17:05.269497       1 main.go:324] Node ha-422549-m04 has CIDR [10.244.3.0/24] 
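	The kindnet log above records one route per remote node, mapping each pod CIDR (10.244.1.0/24 through 10.244.3.0/24) to that node's 192.168.49.x address. A hedged way to verify those routes landed on the primary node (the "ssh --" passthrough is an assumption about this environment):
	
	  out/minikube-linux-arm64 -p ha-422549 ssh -- ip route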
	
	
	==> kube-apiserver [6b0b91d1da0a4c385d0d3110ebc1d18efbc54bab7d6da6bba31c072f2fbd4da9] <==
	I1227 20:16:13.796413       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:13.797072       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 20:16:13.797074       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 20:16:13.797100       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:13.797777       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 20:16:13.797963       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 20:16:13.798046       1 aggregator.go:187] initial CRD sync complete...
	I1227 20:16:13.798090       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 20:16:13.798127       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:16:13.798158       1 cache.go:39] Caches are synced for autoregister controller
	E1227 20:16:13.804997       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 20:16:13.818967       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:13.818980       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 20:16:13.819043       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 20:16:13.824892       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 20:16:13.829882       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:16:13.856520       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:16:13.903885       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:16:14.353399       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1227 20:16:16.144077       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1227 20:16:16.145490       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:16:16.162091       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:16:17.856302       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:16:18.028352       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 20:16:18.100041       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [7c4ac1dbe59ad7d3143dfe74886a6bc3058bfad37ae864b855a6e47c1a4d984e] <==
	I1227 20:15:51.302678       1 serving.go:386] Generated self-signed cert in-memory
	I1227 20:15:51.319186       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1227 20:15:51.319285       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:15:51.320999       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1227 20:15:51.321146       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1227 20:15:51.321625       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1227 20:15:51.321698       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1227 20:16:13.577648       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [8a1b0b47a0ed1caecc63a10c0f1f9666bd9ee325c50ecf1f6c7e085c9598dbfa] <==
	I1227 20:16:17.628599       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.628621       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.628679       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.633925       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.634025       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 20:16:17.634653       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.634716       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.634834       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.634959       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.635096       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.635317       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.635492       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.635766       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.656398       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.659067       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.751050       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549-m02"
	I1227 20:16:17.752259       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549-m03"
	I1227 20:16:17.752315       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549-m04"
	I1227 20:16:17.752343       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549"
	I1227 20:16:17.820816       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.820838       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:16:17.820843       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:16:17.829110       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.887401       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 20:16:51.537342       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-422549-m04"
	
	
	==> kube-proxy [e30e2fc201d45a408198fe1cf19728fccd5ebe17d0f5255f7589564c690889ec] <==
	I1227 20:16:15.717666       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:16:16.119519       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:16:16.241830       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:16.241930       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1227 20:16:16.242046       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:16:16.278310       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:16:16.278410       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:16:16.293265       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:16:16.293750       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:16:16.293812       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:16:16.298528       1 config.go:200] "Starting service config controller"
	I1227 20:16:16.298607       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:16:16.298663       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:16:16.298690       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:16:16.302047       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:16:16.303313       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:16:16.304201       1 config.go:309] "Starting node config controller"
	I1227 20:16:16.304276       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:16:16.304307       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:16:16.399041       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:16:16.402314       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 20:16:16.412735       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [acdd287d4087fec2c7c00eb589c13b06231128c1441e2db4a8f74c57600a6e67] <==
	I1227 20:16:11.576174       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:16:11.578273       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 20:16:11.585603       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 20:16:11.585856       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:16:11.585620       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1227 20:16:13.654680       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 20:16:13.654770       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 20:16:13.654897       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 20:16:13.654960       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 20:16:13.655015       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 20:16:13.655071       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 20:16:13.655125       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 20:16:13.655182       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 20:16:13.655240       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 20:16:13.655293       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 20:16:13.655342       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 20:16:13.655393       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 20:16:13.655511       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 20:16:13.655554       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 20:16:13.655597       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 20:16:13.655648       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 20:16:13.655681       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 20:16:13.655786       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 20:16:13.723865       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	I1227 20:16:15.292118       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:16:14 ha-422549 kubelet[804]: I1227 20:16:14.329398     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d69029af-1fc4-4a31-913e-92e1231e845a-lib-modules\") pod \"kube-proxy-mhmmn\" (UID: \"d69029af-1fc4-4a31-913e-92e1231e845a\") " pod="kube-system/kube-proxy-mhmmn"
	Dec 27 20:16:14 ha-422549 kubelet[804]: I1227 20:16:14.329542     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d69029af-1fc4-4a31-913e-92e1231e845a-xtables-lock\") pod \"kube-proxy-mhmmn\" (UID: \"d69029af-1fc4-4a31-913e-92e1231e845a\") " pod="kube-system/kube-proxy-mhmmn"
	Dec 27 20:16:14 ha-422549 kubelet[804]: I1227 20:16:14.329646     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66d834ae-af1b-456d-ae48-8a0d6608f961-xtables-lock\") pod \"kindnet-qkqmv\" (UID: \"66d834ae-af1b-456d-ae48-8a0d6608f961\") " pod="kube-system/kindnet-qkqmv"
	Dec 27 20:16:14 ha-422549 kubelet[804]: I1227 20:16:14.329783     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/66d834ae-af1b-456d-ae48-8a0d6608f961-cni-cfg\") pod \"kindnet-qkqmv\" (UID: \"66d834ae-af1b-456d-ae48-8a0d6608f961\") " pod="kube-system/kindnet-qkqmv"
	Dec 27 20:16:14 ha-422549 kubelet[804]: I1227 20:16:14.381247     804 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 27 20:16:14 ha-422549 kubelet[804]: W1227 20:16:14.683959     804 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/crio-735879ad1c236176f8b5399b57a79b6c0ab6195af5a05ee38eac2aa69480249f WatchSource:0}: Error finding container 735879ad1c236176f8b5399b57a79b6c0ab6195af5a05ee38eac2aa69480249f: Status 404 returned error can't find the container with id 735879ad1c236176f8b5399b57a79b6c0ab6195af5a05ee38eac2aa69480249f
	Dec 27 20:16:14 ha-422549 kubelet[804]: W1227 20:16:14.706665     804 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/crio-cf93f418a9a0a915233d2584b9d75339bc5bcc13264ad5d080fc2f42d9ebaff8 WatchSource:0}: Error finding container cf93f418a9a0a915233d2584b9d75339bc5bcc13264ad5d080fc2f42d9ebaff8: Status 404 returned error can't find the container with id cf93f418a9a0a915233d2584b9d75339bc5bcc13264ad5d080fc2f42d9ebaff8
	Dec 27 20:16:15 ha-422549 kubelet[804]: E1227 20:16:15.322797     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	Dec 27 20:16:15 ha-422549 kubelet[804]: E1227 20:16:15.333130     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-ha-422549" containerName="kube-controller-manager"
	Dec 27 20:16:15 ha-422549 kubelet[804]: E1227 20:16:15.350577     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	Dec 27 20:16:15 ha-422549 kubelet[804]: I1227 20:16:15.550682     804 kubelet_node_status.go:74] "Attempting to register node" node="ha-422549"
	Dec 27 20:16:15 ha-422549 kubelet[804]: I1227 20:16:15.614938     804 kubelet_node_status.go:123] "Node was previously registered" node="ha-422549"
	Dec 27 20:16:15 ha-422549 kubelet[804]: I1227 20:16:15.615228     804 kubelet_node_status.go:77] "Successfully registered node" node="ha-422549"
	Dec 27 20:16:15 ha-422549 kubelet[804]: I1227 20:16:15.615315     804 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 27 20:16:15 ha-422549 kubelet[804]: I1227 20:16:15.616294     804 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 27 20:16:16 ha-422549 kubelet[804]: E1227 20:16:16.196898     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-ha-422549" containerName="kube-scheduler"
	Dec 27 20:16:16 ha-422549 kubelet[804]: E1227 20:16:16.353325     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	Dec 27 20:16:16 ha-422549 kubelet[804]: E1227 20:16:16.354607     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	Dec 27 20:16:20 ha-422549 kubelet[804]: E1227 20:16:20.687129     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-ha-422549" containerName="kube-controller-manager"
	Dec 27 20:16:21 ha-422549 kubelet[804]: E1227 20:16:21.706076     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-ha-422549" containerName="kube-apiserver"
	Dec 27 20:16:22 ha-422549 kubelet[804]: E1227 20:16:22.368737     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-ha-422549" containerName="kube-apiserver"
	Dec 27 20:16:30 ha-422549 kubelet[804]: E1227 20:16:30.696140     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-ha-422549" containerName="kube-controller-manager"
	Dec 27 20:16:45 ha-422549 kubelet[804]: I1227 20:16:45.426555     804 scope.go:122] "RemoveContainer" containerID="7acd50dc5298fb99db44502b466c9e34b79ddce5613479143c4c5834f09f1731"
	Dec 27 20:16:56 ha-422549 kubelet[804]: E1227 20:16:56.356173     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	Dec 27 20:16:56 ha-422549 kubelet[804]: E1227 20:16:56.356735     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-422549 -n ha-422549
helpers_test.go:270: (dbg) Run:  kubectl --context ha-422549 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (85.78s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.0843352s)
ha_test.go:415: expected profile "ha-422549" in json of 'profile list' to have "Degraded" status but have "HAppy" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-422549\",\"Status\":\"HAppy\",\"Config\":{\"Name\":\"ha-422549\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesR
oot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.35.0\",\"ClusterName\":\"ha-422549\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.35.0\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name
\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.35.0\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.49.4\",\"Port\":8443,\"KubernetesVersion\":\"v1.35.0\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.35.0\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-dev
ice-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\"
:false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000,\"Rosetta\":false},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-422549
helpers_test.go:244: (dbg) docker inspect ha-422549:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf",
	        "Created": "2025-12-27T20:03:01.682141141Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 337233,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:15:42.462104956Z",
	            "FinishedAt": "2025-12-27T20:15:41.57505881Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/hostname",
	        "HostsPath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/hosts",
	        "LogPath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf-json.log",
	        "Name": "/ha-422549",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-422549:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-422549",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf",
	                "LowerDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064/merged",
	                "UpperDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064/diff",
	                "WorkDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-422549",
	                "Source": "/var/lib/docker/volumes/ha-422549/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422549",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422549",
	                "name.minikube.sigs.k8s.io": "ha-422549",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bb71ec3c47b900c0fa3f8d54314b359c784cf244167438faa167df26866a5f2b",
	            "SandboxKey": "/var/run/docker/netns/bb71ec3c47b9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422549": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:de:7f:b9:2b:dc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9521cb9225c5842f69a8435c5cf5485b75f9a8b2c68158742ff27c2be32f5951",
	                    "EndpointID": "8d5c856b7af95de0f10e89f9cba406f7c7feb68311acbe9cee0239ed57d8152d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422549",
	                        "53fd780c3df5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-422549 -n ha-422549
helpers_test.go:253: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p ha-422549 logs -n 25: (1.444400217s)
helpers_test.go:261: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-422549 cp ha-422549-m03:/home/docker/cp-test.txt ha-422549-m04:/home/docker/cp-test_ha-422549-m03_ha-422549-m04.txt               │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test_ha-422549-m03_ha-422549-m04.txt                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp testdata/cp-test.txt ha-422549-m04:/home/docker/cp-test.txt                                                             │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3848759327/001/cp-test_ha-422549-m04.txt │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt ha-422549:/home/docker/cp-test_ha-422549-m04_ha-422549.txt                       │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549 sudo cat /home/docker/cp-test_ha-422549-m04_ha-422549.txt                                                 │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt ha-422549-m02:/home/docker/cp-test_ha-422549-m04_ha-422549-m02.txt               │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m02 sudo cat /home/docker/cp-test_ha-422549-m04_ha-422549-m02.txt                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt ha-422549-m03:/home/docker/cp-test_ha-422549-m04_ha-422549-m03.txt               │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m03 sudo cat /home/docker/cp-test_ha-422549-m04_ha-422549-m03.txt                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ node    │ ha-422549 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ node    │ ha-422549 node start m02 --alsologtostderr -v 5                                                                                      │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ node    │ ha-422549 node list --alsologtostderr -v 5                                                                                           │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │                     │
	│ stop    │ ha-422549 stop --alsologtostderr -v 5                                                                                                │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:07 UTC │
	│ start   │ ha-422549 start --wait true --alsologtostderr -v 5                                                                                   │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:07 UTC │                     │
	│ node    │ ha-422549 node list --alsologtostderr -v 5                                                                                           │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:15 UTC │                     │
	│ node    │ ha-422549 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:15 UTC │                     │
	│ stop    │ ha-422549 stop --alsologtostderr -v 5                                                                                                │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:15 UTC │ 27 Dec 25 20:15 UTC │
	│ start   │ ha-422549 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:15 UTC │ 27 Dec 25 20:17 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:15:42
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:15:42.161076  337106 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:15:42.161339  337106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:15:42.161371  337106 out.go:374] Setting ErrFile to fd 2...
	I1227 20:15:42.161395  337106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:15:42.161910  337106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:15:42.162549  337106 out.go:368] Setting JSON to false
	I1227 20:15:42.163583  337106 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":7095,"bootTime":1766859448,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:15:42.163745  337106 start.go:143] virtualization:  
	I1227 20:15:42.167252  337106 out.go:179] * [ha-422549] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:15:42.171750  337106 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:15:42.172029  337106 notify.go:221] Checking for updates...
	I1227 20:15:42.178183  337106 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:15:42.181404  337106 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:15:42.184507  337106 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:15:42.187835  337106 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:15:42.191251  337106 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:15:42.194951  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:42.195780  337106 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:15:42.234793  337106 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:15:42.234922  337106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:15:42.302450  337106 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 20:15:42.291742685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:15:42.302570  337106 docker.go:319] overlay module found
	I1227 20:15:42.305766  337106 out.go:179] * Using the docker driver based on existing profile
	I1227 20:15:42.308585  337106 start.go:309] selected driver: docker
	I1227 20:15:42.308605  337106 start.go:928] validating driver "docker" against &{Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacc
el:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:15:42.308760  337106 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:15:42.308874  337106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:15:42.372262  337106 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 20:15:42.36286995 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:15:42.372694  337106 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:15:42.372727  337106 cni.go:84] Creating CNI manager for ""
	I1227 20:15:42.372789  337106 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1227 20:15:42.372841  337106 start.go:353] cluster config:
	{Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:15:42.376040  337106 out.go:179] * Starting "ha-422549" primary control-plane node in "ha-422549" cluster
	I1227 20:15:42.378965  337106 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:15:42.382020  337106 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:15:42.384910  337106 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:15:42.384967  337106 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:15:42.385060  337106 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:15:42.385090  337106 cache.go:65] Caching tarball of preloaded images
	I1227 20:15:42.385178  337106 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:15:42.385188  337106 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:15:42.385327  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:15:42.406731  337106 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:15:42.406754  337106 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:15:42.406775  337106 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:15:42.406807  337106 start.go:360] acquireMachinesLock for ha-422549: {Name:mk939e8ee4c2bedc86cc6a99d76298e7b2a26ce2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:15:42.406878  337106 start.go:364] duration metric: took 49.87µs to acquireMachinesLock for "ha-422549"
	I1227 20:15:42.406911  337106 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:15:42.406918  337106 fix.go:54] fixHost starting: 
	I1227 20:15:42.407176  337106 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:15:42.424618  337106 fix.go:112] recreateIfNeeded on ha-422549: state=Stopped err=<nil>
	W1227 20:15:42.424651  337106 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:15:42.429793  337106 out.go:252] * Restarting existing docker container for "ha-422549" ...
	I1227 20:15:42.429887  337106 cli_runner.go:164] Run: docker start ha-422549
	I1227 20:15:42.679169  337106 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:15:42.705015  337106 kic.go:430] container "ha-422549" state is running.
	I1227 20:15:42.705398  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:15:42.726555  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:15:42.726800  337106 machine.go:94] provisionDockerMachine start ...
	I1227 20:15:42.726868  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:42.751689  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:42.752020  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1227 20:15:42.752029  337106 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:15:42.752567  337106 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60238->127.0.0.1:33183: read: connection reset by peer
	I1227 20:15:45.888954  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549
	
	I1227 20:15:45.888987  337106 ubuntu.go:182] provisioning hostname "ha-422549"
	I1227 20:15:45.889052  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:45.906473  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:45.906784  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1227 20:15:45.906800  337106 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-422549 && echo "ha-422549" | sudo tee /etc/hostname
	I1227 20:15:46.050632  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549
	
	I1227 20:15:46.050726  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:46.069043  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:46.069357  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1227 20:15:46.069378  337106 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422549' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422549/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422549' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:15:46.210430  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:15:46.210454  337106 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:15:46.210475  337106 ubuntu.go:190] setting up certificates
	I1227 20:15:46.210485  337106 provision.go:84] configureAuth start
	I1227 20:15:46.210557  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:15:46.227543  337106 provision.go:143] copyHostCerts
	I1227 20:15:46.227593  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:15:46.227625  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:15:46.227646  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:15:46.227726  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:15:46.227825  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:15:46.227847  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:15:46.227858  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:15:46.227890  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:15:46.227942  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:15:46.227963  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:15:46.227975  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:15:46.228004  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:15:46.228059  337106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.ha-422549 san=[127.0.0.1 192.168.49.2 ha-422549 localhost minikube]
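The server certificate above is generated in Go against the minikube CA; purely for illustration, a roughly equivalent OpenSSL flow would be the following (file names and the 365-day lifetime are placeholders; the org and SAN list are taken from the log line above):

  openssl req -new -key server-key.pem -subj "/O=jenkins.ha-422549" -out server.csr
  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 -out server.pem \
    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:ha-422549,DNS:localhost,DNS:minikube')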
	I1227 20:15:46.477651  337106 provision.go:177] copyRemoteCerts
	I1227 20:15:46.477745  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:15:46.477812  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:46.494398  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:15:46.592817  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:15:46.592877  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1227 20:15:46.609148  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:15:46.609214  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:15:46.626129  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:15:46.626186  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:15:46.643096  337106 provision.go:87] duration metric: took 432.58782ms to configureAuth
	I1227 20:15:46.643124  337106 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:15:46.643376  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:46.643487  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:46.660667  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:46.661005  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1227 20:15:46.661026  337106 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:15:47.007057  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:15:47.007122  337106 machine.go:97] duration metric: took 4.280312247s to provisionDockerMachine
	I1227 20:15:47.007150  337106 start.go:293] postStartSetup for "ha-422549" (driver="docker")
	I1227 20:15:47.007178  337106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:15:47.007279  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:15:47.007348  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:47.029053  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:15:47.129052  337106 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:15:47.132168  337106 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:15:47.132192  337106 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:15:47.132203  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:15:47.132254  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:15:47.132333  337106 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:15:47.132339  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:15:47.132433  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:15:47.139569  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:15:47.156024  337106 start.go:296] duration metric: took 148.843658ms for postStartSetup
	I1227 20:15:47.156149  337106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:15:47.156211  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:47.173109  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:15:47.266513  337106 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:15:47.270816  337106 fix.go:56] duration metric: took 4.86389233s for fixHost
	I1227 20:15:47.270844  337106 start.go:83] releasing machines lock for "ha-422549", held for 4.863953055s
	I1227 20:15:47.270913  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:15:47.287367  337106 ssh_runner.go:195] Run: cat /version.json
	I1227 20:15:47.287429  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:47.287703  337106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:15:47.287764  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:47.309269  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:15:47.309529  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:15:47.405178  337106 ssh_runner.go:195] Run: systemctl --version
	I1227 20:15:47.511199  337106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:15:47.547392  337106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:15:47.551737  337106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:15:47.551827  337106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:15:47.559324  337106 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:15:47.559347  337106 start.go:496] detecting cgroup driver to use...
	I1227 20:15:47.559388  337106 detect.go:187] detected "cgroupfs" cgroup driver on host os
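The "cgroupfs" result is consistent with the Docker daemon info captured earlier in this log (CgroupDriver:cgroupfs); it can be confirmed directly on the host with:

  docker info --format '{{.CgroupDriver}}'    # prints: cgroupfs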
	I1227 20:15:47.559434  337106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:15:47.574366  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:15:47.587100  337106 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:15:47.587164  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:15:47.602600  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:15:47.615779  337106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:15:47.738070  337106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:15:47.863690  337106 docker.go:234] disabling docker service ...
	I1227 20:15:47.863793  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:15:47.878841  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:15:47.891780  337106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:15:48.005581  337106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:15:48.146501  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:15:48.159335  337106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:15:48.172971  337106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:15:48.173057  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.182022  337106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:15:48.182123  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.190766  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.199691  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.208613  337106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:15:48.216583  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.225357  337106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.238325  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.247144  337106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:15:48.254972  337106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:15:48.262335  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:15:48.380620  337106 ssh_runner.go:195] Run: sudo systemctl restart crio
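Taken together, the sed edits above should leave the CRI-O drop-in with roughly the following settings before the restart (reconstructed from the commands logged above, not dumped from the host):

  sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf
  # pause_image = "registry.k8s.io/pause:3.10.1"
  # cgroup_manager = "cgroupfs"
  # conmon_cgroup = "pod"
  # default_sysctls = [
  #   "net.ipv4.ip_unprivileged_port_start=0",
  # ]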
	I1227 20:15:48.551875  337106 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:15:48.551947  337106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:15:48.555685  337106 start.go:574] Will wait 60s for crictl version
	I1227 20:15:48.555757  337106 ssh_runner.go:195] Run: which crictl
	I1227 20:15:48.559221  337106 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:15:48.585662  337106 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:15:48.585789  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:15:48.613651  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:15:48.644252  337106 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:15:48.647214  337106 cli_runner.go:164] Run: docker network inspect ha-422549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:15:48.663170  337106 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 20:15:48.666927  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:15:48.676701  337106 kubeadm.go:884] updating cluster {Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:15:48.676861  337106 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:15:48.676926  337106 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:15:48.713302  337106 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:15:48.713323  337106 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:15:48.713375  337106 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:15:48.738578  337106 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:15:48.738606  337106 cache_images.go:86] Images are preloaded, skipping loading
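The preload check above is driven by the `sudo crictl images --output json` output; an equivalent manual spot-check (assuming jq is available on the node) would be:

  sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort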
	I1227 20:15:48.738615  337106 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I1227 20:15:48.738716  337106 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422549 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:15:48.738798  337106 ssh_runner.go:195] Run: crio config
	I1227 20:15:48.806339  337106 cni.go:84] Creating CNI manager for ""
	I1227 20:15:48.806361  337106 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1227 20:15:48.806383  337106 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:15:48.806406  337106 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422549 NodeName:ha-422549 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:15:48.806540  337106 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422549"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:15:48.806566  337106 kube-vip.go:115] generating kube-vip config ...
	I1227 20:15:48.806619  337106 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 20:15:48.818243  337106 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
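The failed probe above is why no IPVS load-balancing is enabled in the kube-vip manifest that follows; only the ARP-announced VIP (192.168.49.254) is configured. The check itself is just:

  sudo sh -c "lsmod | grep ip_vs"    # exit status 1 here, so IPVS support is treated as unavailable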
	I1227 20:15:48.818375  337106 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1227 20:15:48.818447  337106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:15:48.825705  337106 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:15:48.825785  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1227 20:15:48.832852  337106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1227 20:15:48.844713  337106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:15:48.856701  337106 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
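The 2226-byte file written above is the kubeadm configuration printed earlier. An optional sanity check once it is on the node (illustrative only; minikube does not run this here, and it assumes the kubeadm binary in the versioned directory found above supports `config validate`):

  sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new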
	I1227 20:15:48.868844  337106 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 20:15:48.880915  337106 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 20:15:48.884598  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:15:48.893875  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:15:49.019776  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
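A quick way to confirm that the kubelet restart took and that the kube-vip static-pod manifest written above is in place (paths from this log):

  sudo systemctl is-active kubelet
  sudo ls -l /etc/kubernetes/manifests/kube-vip.yaml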
	I1227 20:15:49.036215  337106 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549 for IP: 192.168.49.2
	I1227 20:15:49.036242  337106 certs.go:195] generating shared ca certs ...
	I1227 20:15:49.036258  337106 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:15:49.036390  337106 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:15:49.036447  337106 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:15:49.036460  337106 certs.go:257] generating profile certs ...
	I1227 20:15:49.036541  337106 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key
	I1227 20:15:49.036611  337106 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.743f7ef3
	I1227 20:15:49.036653  337106 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key
	I1227 20:15:49.036666  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:15:49.036679  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:15:49.036694  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:15:49.036704  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:15:49.036720  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:15:49.036731  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:15:49.036746  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:15:49.036756  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:15:49.036804  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:15:49.036836  337106 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:15:49.036848  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:15:49.036874  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:15:49.036910  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:15:49.036939  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:15:49.037002  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:15:49.037036  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:15:49.037057  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:49.037072  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:15:49.037704  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:15:49.057400  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:15:49.076605  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:15:49.095621  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:15:49.115441  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:15:49.135019  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:15:49.162312  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:15:49.179956  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:15:49.203774  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:15:49.228107  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:15:49.246930  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:15:49.265916  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:15:49.281838  337106 ssh_runner.go:195] Run: openssl version
	I1227 20:15:49.287989  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:49.295912  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:15:49.303435  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:49.307018  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:49.307115  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:49.347922  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:15:49.354929  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:15:49.361715  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:15:49.368688  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:15:49.372719  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:15:49.372798  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:15:49.413917  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:15:49.421060  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:15:49.428016  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:15:49.435273  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:15:49.438964  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:15:49.439075  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:15:49.480693  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
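The symlink names checked above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the OpenSSL subject-hash convention: the hash printed by `openssl x509 -hash` plus a ".0" suffix, which is how OpenSSL-based clients locate a trusted CA in /etc/ssl/certs. For example:

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # b5213941
  ls -l /etc/ssl/certs/b5213941.0    # the link whose presence `sudo test -L` verified above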
	I1227 20:15:49.488361  337106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:15:49.492062  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:15:49.532621  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:15:49.573227  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:15:49.615004  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:15:49.660835  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:15:49.706320  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
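Each `-checkend 86400` call above exits non-zero only if the certificate expires within the next 86400 seconds (24 hours), which is how an expiring control-plane certificate would be flagged before the cluster restart. Run standalone it behaves like:

  openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
    && echo "still valid for at least 24h" || echo "expires within 24h (or already expired)"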
	I1227 20:15:49.793965  337106 kubeadm.go:401] StartCluster: {Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:15:49.794119  337106 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:15:49.794193  337106 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:15:49.873661  337106 cri.go:96] found id: "acdd287d4087fec2c7c00eb589c13b06231128c1441e2db4a8f74c57600a6e67"
	I1227 20:15:49.873685  337106 cri.go:96] found id: "7c4ac1dbe59ad7d3143dfe74886a6bc3058bfad37ae864b855a6e47c1a4d984e"
	I1227 20:15:49.873690  337106 cri.go:96] found id: "6b0b91d1da0a4c385d0d3110ebc1d18efbc54bab7d6da6bba31c072f2fbd4da9"
	I1227 20:15:49.873694  337106 cri.go:96] found id: "776b31832bd3b44eb905f188f6aa9c0428a287ba7eaeb4ed172dd8bef1b5795b"
	I1227 20:15:49.873697  337106 cri.go:96] found id: "97ce57129ce3bc803fd62d49e1f3f06d06aa64d93e2ef36f372084cbbd21e34a"
	I1227 20:15:49.873717  337106 cri.go:96] found id: ""
	I1227 20:15:49.873771  337106 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:15:49.891661  337106 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:15:49Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:15:49.891749  337106 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:15:49.906600  337106 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:15:49.906624  337106 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:15:49.906703  337106 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:15:49.919028  337106 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:15:49.919479  337106 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-422549" does not appear in /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:15:49.919620  337106 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-272475/kubeconfig needs updating (will repair): [kubeconfig missing "ha-422549" cluster setting kubeconfig missing "ha-422549" context setting]
	I1227 20:15:49.919957  337106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
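minikube writes the repaired kubeconfig directly under the WriteFile lock shown above; expressed with kubectl purely for illustration, the added cluster/context entries correspond roughly to the following (server, CA, and client-cert paths taken from the client config logged just below):

  kubectl config set-cluster ha-422549 --server=https://192.168.49.2:8443 \
    --certificate-authority=/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt
  kubectl config set-credentials ha-422549 \
    --client-certificate=/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt \
    --client-key=/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key
  kubectl config set-context ha-422549 --cluster=ha-422549 --user=ha-422549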
	I1227 20:15:49.920555  337106 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 20:15:49.921302  337106 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1227 20:15:49.921327  337106 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1227 20:15:49.921333  337106 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1227 20:15:49.921364  337106 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1227 20:15:49.921405  337106 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1227 20:15:49.921411  337106 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1227 20:15:49.921423  337106 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1227 20:15:49.921745  337106 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:15:49.936013  337106 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1227 20:15:49.936040  337106 kubeadm.go:602] duration metric: took 29.409884ms to restartPrimaryControlPlane
	I1227 20:15:49.936051  337106 kubeadm.go:403] duration metric: took 142.110676ms to StartCluster
	I1227 20:15:49.936075  337106 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:15:49.936142  337106 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:15:49.937228  337106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:15:49.937930  337106 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:15:49.938100  337106 start.go:242] waiting for startup goroutines ...
	I1227 20:15:49.938130  337106 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:15:49.939423  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:49.942218  337106 out.go:179] * Enabled addons: 
	I1227 20:15:49.945329  337106 addons.go:530] duration metric: took 7.202537ms for enable addons: enabled=[]
	I1227 20:15:49.945417  337106 start.go:247] waiting for cluster config update ...
	I1227 20:15:49.945442  337106 start.go:256] writing updated cluster config ...
	I1227 20:15:49.948818  337106 out.go:203] 
	I1227 20:15:49.952226  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:49.952424  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:15:49.955848  337106 out.go:179] * Starting "ha-422549-m02" control-plane node in "ha-422549" cluster
	I1227 20:15:49.958975  337106 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:15:49.962204  337106 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:15:49.965179  337106 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:15:49.965273  337106 cache.go:65] Caching tarball of preloaded images
	I1227 20:15:49.965249  337106 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:15:49.965709  337106 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:15:49.965749  337106 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:15:49.965939  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:15:49.990566  337106 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:15:49.990585  337106 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:15:49.990599  337106 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:15:49.990629  337106 start.go:360] acquireMachinesLock for ha-422549-m02: {Name:mk8fc7aa5d6c41749cc4b9db094e3fd243d8b868 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:15:49.990677  337106 start.go:364] duration metric: took 33.255µs to acquireMachinesLock for "ha-422549-m02"
	I1227 20:15:49.990697  337106 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:15:49.990704  337106 fix.go:54] fixHost starting: m02
	I1227 20:15:49.990960  337106 cli_runner.go:164] Run: docker container inspect ha-422549-m02 --format={{.State.Status}}
	I1227 20:15:50.012661  337106 fix.go:112] recreateIfNeeded on ha-422549-m02: state=Stopped err=<nil>
	W1227 20:15:50.012689  337106 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:15:50.016334  337106 out.go:252] * Restarting existing docker container for "ha-422549-m02" ...
	I1227 20:15:50.016437  337106 cli_runner.go:164] Run: docker start ha-422549-m02
	I1227 20:15:50.398628  337106 cli_runner.go:164] Run: docker container inspect ha-422549-m02 --format={{.State.Status}}
	I1227 20:15:50.427580  337106 kic.go:430] container "ha-422549-m02" state is running.
	I1227 20:15:50.427943  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m02
	I1227 20:15:50.459424  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:15:50.459657  337106 machine.go:94] provisionDockerMachine start ...
	I1227 20:15:50.459714  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:50.490531  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:50.493631  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1227 20:15:50.493650  337106 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:15:50.494339  337106 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 20:15:53.641274  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m02
	
	I1227 20:15:53.641349  337106 ubuntu.go:182] provisioning hostname "ha-422549-m02"
	I1227 20:15:53.641467  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:53.663080  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:53.663387  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1227 20:15:53.663406  337106 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-422549-m02 && echo "ha-422549-m02" | sudo tee /etc/hostname
	I1227 20:15:53.819054  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m02
	
	I1227 20:15:53.819139  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:53.847197  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:53.847500  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1227 20:15:53.847516  337106 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422549-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422549-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422549-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:15:53.989824  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:15:53.989849  337106 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:15:53.989866  337106 ubuntu.go:190] setting up certificates
	I1227 20:15:53.989878  337106 provision.go:84] configureAuth start
	I1227 20:15:53.989941  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m02
	I1227 20:15:54.009870  337106 provision.go:143] copyHostCerts
	I1227 20:15:54.009915  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:15:54.009950  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:15:54.009964  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:15:54.010041  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:15:54.010125  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:15:54.010148  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:15:54.010153  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:15:54.010182  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:15:54.010267  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:15:54.010289  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:15:54.010297  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:15:54.010323  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:15:54.010374  337106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.ha-422549-m02 san=[127.0.0.1 192.168.49.3 ha-422549-m02 localhost minikube]
	I1227 20:15:54.260286  337106 provision.go:177] copyRemoteCerts
	I1227 20:15:54.260405  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:15:54.260467  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:54.278663  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:15:54.377066  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:15:54.377172  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:15:54.395067  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:15:54.395180  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 20:15:54.412398  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:15:54.412507  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 20:15:54.429091  337106 provision.go:87] duration metric: took 439.199295ms to configureAuth
	I1227 20:15:54.429119  337106 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:15:54.429346  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:54.429480  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:54.446402  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:54.446712  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1227 20:15:54.446736  337106 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:15:54.817328  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:15:54.817351  337106 machine.go:97] duration metric: took 4.357685623s to provisionDockerMachine
	I1227 20:15:54.817363  337106 start.go:293] postStartSetup for "ha-422549-m02" (driver="docker")
	I1227 20:15:54.817373  337106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:15:54.817438  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:15:54.817558  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:54.834291  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:15:54.933155  337106 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:15:54.936441  337106 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:15:54.936469  337106 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:15:54.936480  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:15:54.936536  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:15:54.936618  337106 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:15:54.936632  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:15:54.936739  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:15:54.944112  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:15:54.961353  337106 start.go:296] duration metric: took 143.973459ms for postStartSetup
	I1227 20:15:54.961439  337106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:15:54.961529  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:54.978679  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:15:55.075001  337106 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:15:55.080166  337106 fix.go:56] duration metric: took 5.089454661s for fixHost
	I1227 20:15:55.080193  337106 start.go:83] releasing machines lock for "ha-422549-m02", held for 5.089507139s
	I1227 20:15:55.080267  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m02
	I1227 20:15:55.100982  337106 out.go:179] * Found network options:
	I1227 20:15:55.103953  337106 out.go:179]   - NO_PROXY=192.168.49.2
	W1227 20:15:55.106802  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:15:55.106845  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	I1227 20:15:55.106919  337106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:15:55.106964  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:55.107011  337106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:15:55.107066  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:55.130151  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:15:55.137687  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:15:55.324223  337106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:15:55.328436  337106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:15:55.328502  337106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:15:55.336088  337106 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:15:55.336120  337106 start.go:496] detecting cgroup driver to use...
	I1227 20:15:55.336165  337106 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:15:55.336216  337106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:15:55.350639  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:15:55.363702  337106 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:15:55.363812  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:15:55.380023  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:15:55.396017  337106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:15:55.627299  337106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:15:55.867067  337106 docker.go:234] disabling docker service ...
	I1227 20:15:55.867179  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:15:55.887006  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:15:55.903434  337106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:15:56.147368  337106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:15:56.372701  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:15:56.386071  337106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:15:56.438830  337106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:15:56.438945  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.453154  337106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:15:56.453272  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.469839  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.480255  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.492229  337106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:15:56.504717  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.522023  337106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.536543  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.549900  337106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:15:56.562631  337106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:15:56.570307  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:15:56.790142  337106 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:15:57.038862  337106 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:15:57.038970  337106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:15:57.042575  337106 start.go:574] Will wait 60s for crictl version
	I1227 20:15:57.042675  337106 ssh_runner.go:195] Run: which crictl
	I1227 20:15:57.046123  337106 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:15:57.079472  337106 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:15:57.079604  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:15:57.111539  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:15:57.144245  337106 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:15:57.147176  337106 out.go:179]   - env NO_PROXY=192.168.49.2
	I1227 20:15:57.150339  337106 cli_runner.go:164] Run: docker network inspect ha-422549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:15:57.166874  337106 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 20:15:57.170704  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:15:57.180393  337106 mustload.go:66] Loading cluster: ha-422549
	I1227 20:15:57.180638  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:57.180911  337106 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:15:57.198058  337106 host.go:66] Checking if "ha-422549" exists ...
	I1227 20:15:57.198339  337106 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549 for IP: 192.168.49.3
	I1227 20:15:57.198353  337106 certs.go:195] generating shared ca certs ...
	I1227 20:15:57.198367  337106 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:15:57.198490  337106 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:15:57.198538  337106 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:15:57.198549  337106 certs.go:257] generating profile certs ...
	I1227 20:15:57.198625  337106 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key
	I1227 20:15:57.198688  337106 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.982843aa
	I1227 20:15:57.198735  337106 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key
	I1227 20:15:57.198748  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:15:57.198762  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:15:57.198779  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:15:57.198791  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:15:57.198810  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:15:57.198822  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:15:57.198837  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:15:57.198847  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:15:57.198901  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:15:57.198935  337106 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:15:57.198948  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:15:57.198974  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:15:57.199001  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:15:57.199031  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:15:57.199079  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:15:57.199116  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:15:57.199131  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:15:57.199146  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:57.199227  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:57.217178  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:15:57.309803  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1227 20:15:57.313760  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1227 20:15:57.321367  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1227 20:15:57.324564  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1227 20:15:57.332196  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1227 20:15:57.335588  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1227 20:15:57.343125  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1227 20:15:57.346654  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1227 20:15:57.354254  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1227 20:15:57.357588  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1227 20:15:57.365565  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1227 20:15:57.369083  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1227 20:15:57.377616  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:15:57.394501  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:15:57.411297  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:15:57.428988  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:15:57.454933  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:15:57.477949  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:15:57.503718  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:15:57.527644  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:15:57.546021  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:15:57.562799  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:15:57.579794  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:15:57.596739  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1227 20:15:57.608968  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1227 20:15:57.621234  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1227 20:15:57.633283  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1227 20:15:57.645247  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1227 20:15:57.656994  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1227 20:15:57.668811  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (728 bytes)
	I1227 20:15:57.680824  337106 ssh_runner.go:195] Run: openssl version
	I1227 20:15:57.687264  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:15:57.694487  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:15:57.701580  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:15:57.705288  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:15:57.705345  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:15:57.746792  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:15:57.754009  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:15:57.760822  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:15:57.767703  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:15:57.771201  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:15:57.771305  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:15:57.813599  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:15:57.821036  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:57.828245  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:15:57.835688  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:57.839528  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:57.839640  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:57.880298  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:15:57.887708  337106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:15:57.891264  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:15:57.931649  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:15:57.972880  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:15:58.015739  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:15:58.057920  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:15:58.099308  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 20:15:58.140147  337106 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.35.0 crio true true} ...
	I1227 20:15:58.140265  337106 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422549-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:15:58.140313  337106 kube-vip.go:115] generating kube-vip config ...
	I1227 20:15:58.140373  337106 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 20:15:58.151945  337106 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:15:58.152003  337106 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1227 20:15:58.152075  337106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:15:58.159193  337106 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:15:58.159305  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1227 20:15:58.166464  337106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 20:15:58.178769  337106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:15:58.190381  337106 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 20:15:58.202642  337106 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 20:15:58.206198  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:15:58.215567  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:15:58.331455  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:15:58.345573  337106 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:15:58.345907  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:58.350455  337106 out.go:179] * Verifying Kubernetes components...
	I1227 20:15:58.353287  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:15:58.476026  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:15:58.491956  337106 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1227 20:15:58.492036  337106 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1227 20:15:58.492360  337106 node_ready.go:35] waiting up to 6m0s for node "ha-422549-m02" to be "Ready" ...
	W1227 20:16:08.493659  337106 node_ready.go:55] error getting node "ha-422549-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422549-m02": net/http: TLS handshake timeout
	W1227 20:16:13.724508  337106 node_ready.go:57] node "ha-422549-m02" has "Ready":"Unknown" status (will retry)
	I1227 20:16:13.998074  337106 node_ready.go:49] node "ha-422549-m02" is "Ready"
	I1227 20:16:13.998104  337106 node_ready.go:38] duration metric: took 15.505718327s for node "ha-422549-m02" to be "Ready" ...
	I1227 20:16:13.998117  337106 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:16:13.998195  337106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:16:14.018969  337106 api_server.go:72] duration metric: took 15.673348785s to wait for apiserver process to appear ...
	I1227 20:16:14.019000  337106 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:16:14.019022  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:14.028770  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:16:14.028803  337106 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:16:14.519178  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:14.550966  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:16:14.551052  337106 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:16:15.019197  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:15.046385  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:16:15.046479  337106 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:16:15.519851  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:15.557956  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:16:15.558047  337106 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:16:16.019247  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:16.033187  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:16:16.033267  337106 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:16:16.519670  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:16.536800  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1227 20:16:16.539603  337106 api_server.go:141] control plane version: v1.35.0
	I1227 20:16:16.539669  337106 api_server.go:131] duration metric: took 2.52066052s to wait for apiserver health ...
	I1227 20:16:16.539693  337106 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:16:16.570231  337106 system_pods.go:59] 26 kube-system pods found
	I1227 20:16:16.570324  337106 system_pods.go:61] "coredns-7d764666f9-mf5xw" [5a7f58c2-f991-46f0-9ece-9a561d53d25f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:16.570350  337106 system_pods.go:61] "coredns-7d764666f9-n5d9d" [159febfd-c1e4-4897-a372-59e4a3069914] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:16.570386  337106 system_pods.go:61] "etcd-ha-422549" [8f26f563-e734-4add-aefe-484f0e873a1e] Running
	I1227 20:16:16.570414  337106 system_pods.go:61] "etcd-ha-422549-m02" [5fed7e48-07c4-4a07-b63b-0fccbd196f6f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:16:16.570435  337106 system_pods.go:61] "etcd-ha-422549-m03" [d22f78a1-2f4c-41e6-b65a-bf7108686c71] Running
	I1227 20:16:16.570460  337106 system_pods.go:61] "kindnet-28svl" [1494f795-941f-418e-8090-098225eb9c6a] Running
	I1227 20:16:16.570493  337106 system_pods.go:61] "kindnet-4hl7v" [ea2cc8a1-df16-440c-a093-a5d915b249b4] Running
	I1227 20:16:16.570521  337106 system_pods.go:61] "kindnet-5wczs" [df3d7298-4140-464f-a6e8-c614e1683488] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:16:16.570663  337106 system_pods.go:61] "kindnet-qkqmv" [66d834ae-af1b-456d-ae48-8a0d6608f961] Running
	I1227 20:16:16.570696  337106 system_pods.go:61] "kube-apiserver-ha-422549" [14f8e794-2ba7-477d-806b-03dd5a33d868] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:16:16.570721  337106 system_pods.go:61] "kube-apiserver-ha-422549-m02" [a4b97cc6-26ef-4d46-9ef9-bdee08eb89d6] Running
	I1227 20:16:16.570746  337106 system_pods.go:61] "kube-apiserver-ha-422549-m03" [71f23288-3e33-4bc8-9182-08c190ae026f] Running
	I1227 20:16:16.570787  337106 system_pods.go:61] "kube-controller-manager-ha-422549" [b69af60f-4eac-4e85-aa81-66b7616a46f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:16:16.570820  337106 system_pods.go:61] "kube-controller-manager-ha-422549-m02" [07c0e68f-76e5-4cee-92a2-05dd2fb4c3e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:16:16.570843  337106 system_pods.go:61] "kube-controller-manager-ha-422549-m03" [af291694-2986-455c-8588-c2879d10ff3b] Running
	I1227 20:16:16.570865  337106 system_pods.go:61] "kube-proxy-cg4z5" [42f74e61-eb67-4d02-8f08-f77f7163f5fc] Running
	I1227 20:16:16.570897  337106 system_pods.go:61] "kube-proxy-kscg6" [baa716d5-546a-4922-ba51-fe1116e36c75] Running
	I1227 20:16:16.570923  337106 system_pods.go:61] "kube-proxy-mhmmn" [d69029af-1fc4-4a31-913e-92e1231e845a] Running
	I1227 20:16:16.570948  337106 system_pods.go:61] "kube-proxy-nqr7h" [d0fc3ef5-765a-4376-94e6-42237908d3fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 20:16:16.570969  337106 system_pods.go:61] "kube-scheduler-ha-422549" [549e105d-d2e7-42b6-ae48-098d590e7b1d] Running
	I1227 20:16:16.571002  337106 system_pods.go:61] "kube-scheduler-ha-422549-m02" [db9187da-87a8-4b73-baea-76f3d9ef35c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:16:16.571026  337106 system_pods.go:61] "kube-scheduler-ha-422549-m03" [2a6b70b3-5303-404f-8b1d-1a65b9b81555] Running
	I1227 20:16:16.571044  337106 system_pods.go:61] "kube-vip-ha-422549" [32d647ce-90ed-4f56-b4c8-7ed445019d88] Running
	I1227 20:16:16.571067  337106 system_pods.go:61] "kube-vip-ha-422549-m02" [ddde9374-24b7-498d-b829-6902c612b272] Running
	I1227 20:16:16.571109  337106 system_pods.go:61] "kube-vip-ha-422549-m03" [39a60c56-1bf0-4232-9af0-f55e0c66a33d] Running
	I1227 20:16:16.571136  337106 system_pods.go:61] "storage-provisioner" [0d645eab-223f-4dd6-9518-6ab4a21d4c09] Running
	I1227 20:16:16.571156  337106 system_pods.go:74] duration metric: took 31.434553ms to wait for pod list to return data ...
	I1227 20:16:16.571179  337106 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:16:16.590199  337106 default_sa.go:45] found service account: "default"
	I1227 20:16:16.590265  337106 default_sa.go:55] duration metric: took 19.064027ms for default service account to be created ...
	I1227 20:16:16.590290  337106 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:16:16.623079  337106 system_pods.go:86] 26 kube-system pods found
	I1227 20:16:16.623169  337106 system_pods.go:89] "coredns-7d764666f9-mf5xw" [5a7f58c2-f991-46f0-9ece-9a561d53d25f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:16.623195  337106 system_pods.go:89] "coredns-7d764666f9-n5d9d" [159febfd-c1e4-4897-a372-59e4a3069914] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:16.623234  337106 system_pods.go:89] "etcd-ha-422549" [8f26f563-e734-4add-aefe-484f0e873a1e] Running
	I1227 20:16:16.623263  337106 system_pods.go:89] "etcd-ha-422549-m02" [5fed7e48-07c4-4a07-b63b-0fccbd196f6f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:16:16.623283  337106 system_pods.go:89] "etcd-ha-422549-m03" [d22f78a1-2f4c-41e6-b65a-bf7108686c71] Running
	I1227 20:16:16.623303  337106 system_pods.go:89] "kindnet-28svl" [1494f795-941f-418e-8090-098225eb9c6a] Running
	I1227 20:16:16.623335  337106 system_pods.go:89] "kindnet-4hl7v" [ea2cc8a1-df16-440c-a093-a5d915b249b4] Running
	I1227 20:16:16.623362  337106 system_pods.go:89] "kindnet-5wczs" [df3d7298-4140-464f-a6e8-c614e1683488] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:16:16.623385  337106 system_pods.go:89] "kindnet-qkqmv" [66d834ae-af1b-456d-ae48-8a0d6608f961] Running
	I1227 20:16:16.623411  337106 system_pods.go:89] "kube-apiserver-ha-422549" [14f8e794-2ba7-477d-806b-03dd5a33d868] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:16:16.623447  337106 system_pods.go:89] "kube-apiserver-ha-422549-m02" [a4b97cc6-26ef-4d46-9ef9-bdee08eb89d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:16:16.623475  337106 system_pods.go:89] "kube-apiserver-ha-422549-m03" [71f23288-3e33-4bc8-9182-08c190ae026f] Running
	I1227 20:16:16.623501  337106 system_pods.go:89] "kube-controller-manager-ha-422549" [b69af60f-4eac-4e85-aa81-66b7616a46f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:16:16.623525  337106 system_pods.go:89] "kube-controller-manager-ha-422549-m02" [07c0e68f-76e5-4cee-92a2-05dd2fb4c3e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:16:16.623557  337106 system_pods.go:89] "kube-controller-manager-ha-422549-m03" [af291694-2986-455c-8588-c2879d10ff3b] Running
	I1227 20:16:16.623583  337106 system_pods.go:89] "kube-proxy-cg4z5" [42f74e61-eb67-4d02-8f08-f77f7163f5fc] Running
	I1227 20:16:16.623607  337106 system_pods.go:89] "kube-proxy-kscg6" [baa716d5-546a-4922-ba51-fe1116e36c75] Running
	I1227 20:16:16.623632  337106 system_pods.go:89] "kube-proxy-mhmmn" [d69029af-1fc4-4a31-913e-92e1231e845a] Running
	I1227 20:16:16.623664  337106 system_pods.go:89] "kube-proxy-nqr7h" [d0fc3ef5-765a-4376-94e6-42237908d3fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 20:16:16.623690  337106 system_pods.go:89] "kube-scheduler-ha-422549" [549e105d-d2e7-42b6-ae48-098d590e7b1d] Running
	I1227 20:16:16.623713  337106 system_pods.go:89] "kube-scheduler-ha-422549-m02" [db9187da-87a8-4b73-baea-76f3d9ef35c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:16:16.623737  337106 system_pods.go:89] "kube-scheduler-ha-422549-m03" [2a6b70b3-5303-404f-8b1d-1a65b9b81555] Running
	I1227 20:16:16.623769  337106 system_pods.go:89] "kube-vip-ha-422549" [32d647ce-90ed-4f56-b4c8-7ed445019d88] Running
	I1227 20:16:16.623794  337106 system_pods.go:89] "kube-vip-ha-422549-m02" [ddde9374-24b7-498d-b829-6902c612b272] Running
	I1227 20:16:16.623818  337106 system_pods.go:89] "kube-vip-ha-422549-m03" [39a60c56-1bf0-4232-9af0-f55e0c66a33d] Running
	I1227 20:16:16.623842  337106 system_pods.go:89] "storage-provisioner" [0d645eab-223f-4dd6-9518-6ab4a21d4c09] Running
	I1227 20:16:16.623877  337106 system_pods.go:126] duration metric: took 33.567641ms to wait for k8s-apps to be running ...
	I1227 20:16:16.623905  337106 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:16:16.623994  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:16:16.670311  337106 system_svc.go:56] duration metric: took 46.39668ms WaitForService to wait for kubelet
	I1227 20:16:16.670384  337106 kubeadm.go:587] duration metric: took 18.324769156s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:16:16.670417  337106 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:16:16.708894  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:16.708992  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:16.709018  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:16.709039  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:16.709068  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:16.709094  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:16.709113  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:16.709132  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:16.709151  337106 node_conditions.go:105] duration metric: took 38.715442ms to run NodePressure ...
	I1227 20:16:16.709184  337106 start.go:242] waiting for startup goroutines ...
	I1227 20:16:16.709228  337106 start.go:256] writing updated cluster config ...
	I1227 20:16:16.713916  337106 out.go:203] 
	I1227 20:16:16.723292  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:16.723425  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:16:16.727142  337106 out.go:179] * Starting "ha-422549-m03" control-plane node in "ha-422549" cluster
	I1227 20:16:16.732478  337106 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:16:16.735844  337106 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:16:16.739409  337106 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:16:16.739458  337106 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:16:16.739659  337106 cache.go:65] Caching tarball of preloaded images
	I1227 20:16:16.739753  337106 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:16:16.739768  337106 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:16:16.739908  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:16:16.767918  337106 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:16:16.767942  337106 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:16:16.767957  337106 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:16:16.767980  337106 start.go:360] acquireMachinesLock for ha-422549-m03: {Name:mkf062d56fcf026ae5cb73bd2d2d3016f0f6c481 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:16:16.768043  337106 start.go:364] duration metric: took 41.697µs to acquireMachinesLock for "ha-422549-m03"
	I1227 20:16:16.768068  337106 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:16:16.768074  337106 fix.go:54] fixHost starting: m03
	I1227 20:16:16.768352  337106 cli_runner.go:164] Run: docker container inspect ha-422549-m03 --format={{.State.Status}}
	I1227 20:16:16.790621  337106 fix.go:112] recreateIfNeeded on ha-422549-m03: state=Stopped err=<nil>
	W1227 20:16:16.790653  337106 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:16:16.794891  337106 out.go:252] * Restarting existing docker container for "ha-422549-m03" ...
	I1227 20:16:16.794974  337106 cli_runner.go:164] Run: docker start ha-422549-m03
	I1227 20:16:17.149956  337106 cli_runner.go:164] Run: docker container inspect ha-422549-m03 --format={{.State.Status}}
	I1227 20:16:17.174958  337106 kic.go:430] container "ha-422549-m03" state is running.
	I1227 20:16:17.175307  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m03
	I1227 20:16:17.213633  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:16:17.213863  337106 machine.go:94] provisionDockerMachine start ...
	I1227 20:16:17.213929  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:17.241742  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:17.242041  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1227 20:16:17.242056  337106 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:16:17.242635  337106 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 20:16:20.405227  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m03
	
	I1227 20:16:20.405265  337106 ubuntu.go:182] provisioning hostname "ha-422549-m03"
	I1227 20:16:20.405335  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:20.447382  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:20.447685  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1227 20:16:20.447702  337106 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-422549-m03 && echo "ha-422549-m03" | sudo tee /etc/hostname
	I1227 20:16:20.641581  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m03
	
	I1227 20:16:20.641669  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:20.671096  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:20.671417  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1227 20:16:20.671491  337106 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422549-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422549-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422549-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:16:20.825909  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:16:20.825934  337106 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:16:20.825963  337106 ubuntu.go:190] setting up certificates
	I1227 20:16:20.825973  337106 provision.go:84] configureAuth start
	I1227 20:16:20.826043  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m03
	I1227 20:16:20.848683  337106 provision.go:143] copyHostCerts
	I1227 20:16:20.848722  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:16:20.848751  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:16:20.848757  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:16:20.848829  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:16:20.848936  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:16:20.848954  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:16:20.848959  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:16:20.848987  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:16:20.849035  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:16:20.849051  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:16:20.849055  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:16:20.849079  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:16:20.849139  337106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.ha-422549-m03 san=[127.0.0.1 192.168.49.4 ha-422549-m03 localhost minikube]
	I1227 20:16:20.958713  337106 provision.go:177] copyRemoteCerts
	I1227 20:16:20.958777  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:16:20.958919  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:20.978456  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m03/id_rsa Username:docker}
	I1227 20:16:21.097778  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:16:21.097855  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:16:21.118223  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:16:21.118280  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 20:16:21.171526  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:16:21.171643  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:16:21.238272  337106 provision.go:87] duration metric: took 412.285774ms to configureAuth
	I1227 20:16:21.238317  337106 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:16:21.238586  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:21.238711  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:21.261112  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:21.261428  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1227 20:16:21.261479  337106 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:16:22.736503  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:16:22.736545  337106 machine.go:97] duration metric: took 5.522665605s to provisionDockerMachine
	I1227 20:16:22.736559  337106 start.go:293] postStartSetup for "ha-422549-m03" (driver="docker")
	I1227 20:16:22.736569  337106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:16:22.736631  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:16:22.736681  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:22.757560  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m03/id_rsa Username:docker}
	I1227 20:16:22.872943  337106 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:16:22.877107  337106 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:16:22.877150  337106 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:16:22.877162  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:16:22.877224  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:16:22.877310  337106 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:16:22.877323  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:16:22.877568  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:16:22.887508  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:16:22.935543  337106 start.go:296] duration metric: took 198.968452ms for postStartSetup
	I1227 20:16:22.935675  337106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:16:22.935751  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:22.962394  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m03/id_rsa Username:docker}
	I1227 20:16:23.086315  337106 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:16:23.098060  337106 fix.go:56] duration metric: took 6.329978316s for fixHost
	I1227 20:16:23.098095  337106 start.go:83] releasing machines lock for "ha-422549-m03", held for 6.330038441s
	I1227 20:16:23.098169  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m03
	I1227 20:16:23.127385  337106 out.go:179] * Found network options:
	I1227 20:16:23.130521  337106 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1227 20:16:23.133556  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:23.133603  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:23.133636  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:23.133648  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	I1227 20:16:23.133723  337106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:16:23.133754  337106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:16:23.133766  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:23.133843  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:23.174788  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m03/id_rsa Username:docker}
	I1227 20:16:23.176337  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m03/id_rsa Username:docker}
	I1227 20:16:23.532310  337106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:16:23.539423  337106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:16:23.539508  337106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:16:23.547781  337106 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:16:23.547805  337106 start.go:496] detecting cgroup driver to use...
	I1227 20:16:23.547836  337106 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:16:23.547889  337106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:16:23.564242  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:16:23.579653  337106 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:16:23.579767  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:16:23.598176  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:16:23.613182  337106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:16:23.877595  337106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:16:24.169571  337106 docker.go:234] disabling docker service ...
	I1227 20:16:24.169685  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:16:24.197205  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:16:24.211488  337106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:16:24.466324  337106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:16:24.716660  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:16:24.734029  337106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:16:24.758554  337106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:16:24.758647  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.777034  337106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:16:24.777106  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.791147  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.805710  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.818822  337106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:16:24.828018  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.843848  337106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.852557  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.865822  337106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:16:24.881844  337106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:16:24.890467  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:16:25.116336  337106 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:16:26.436202  337106 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.319834137s)
	I1227 20:16:26.436227  337106 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:16:26.436285  337106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:16:26.440409  337106 start.go:574] Will wait 60s for crictl version
	I1227 20:16:26.440474  337106 ssh_runner.go:195] Run: which crictl
	I1227 20:16:26.444800  337106 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:16:26.475048  337106 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:16:26.475137  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:16:26.509827  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:16:26.549254  337106 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:16:26.552189  337106 out.go:179]   - env NO_PROXY=192.168.49.2
	I1227 20:16:26.555166  337106 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1227 20:16:26.558176  337106 cli_runner.go:164] Run: docker network inspect ha-422549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:16:26.575734  337106 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 20:16:26.580184  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:16:26.590410  337106 mustload.go:66] Loading cluster: ha-422549
	I1227 20:16:26.590667  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:26.590918  337106 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:16:26.608326  337106 host.go:66] Checking if "ha-422549" exists ...
	I1227 20:16:26.608672  337106 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549 for IP: 192.168.49.4
	I1227 20:16:26.608684  337106 certs.go:195] generating shared ca certs ...
	I1227 20:16:26.608708  337106 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:16:26.608822  337106 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:16:26.608870  337106 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:16:26.608877  337106 certs.go:257] generating profile certs ...
	I1227 20:16:26.608966  337106 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key
	I1227 20:16:26.609032  337106 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.d8cf7377
	I1227 20:16:26.609078  337106 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key
	I1227 20:16:26.609087  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:16:26.609099  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:16:26.609109  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:16:26.609121  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:16:26.609131  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:16:26.609142  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:16:26.609153  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:16:26.609163  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:16:26.609238  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:16:26.609270  337106 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:16:26.609278  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:16:26.609540  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:16:26.609594  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:16:26.609622  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:16:26.609673  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:16:26.609705  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:26.609718  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:16:26.609729  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:16:26.609784  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:16:26.627281  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:16:26.717750  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1227 20:16:26.722194  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1227 20:16:26.732379  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1227 20:16:26.736107  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1227 20:16:26.744795  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1227 20:16:26.748608  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1227 20:16:26.757298  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1227 20:16:26.760963  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1227 20:16:26.770282  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1227 20:16:26.774405  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1227 20:16:26.782912  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1227 20:16:26.787280  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1227 20:16:26.796054  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:16:26.815746  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:16:26.833735  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:16:26.852956  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:16:26.873558  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:16:26.893781  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:16:26.912114  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:16:26.930067  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:16:26.954144  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:16:26.992095  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:16:27.032398  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:16:27.058957  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1227 20:16:27.082646  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1227 20:16:27.099055  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1227 20:16:27.114942  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1227 20:16:27.128524  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1227 20:16:27.143949  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1227 20:16:27.166895  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (728 bytes)
	I1227 20:16:27.189731  337106 ssh_runner.go:195] Run: openssl version
	I1227 20:16:27.199330  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:27.207176  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:16:27.215001  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:27.218816  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:27.218944  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:27.262656  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:16:27.270122  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:16:27.278066  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:16:27.286224  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:16:27.290216  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:16:27.290299  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:16:27.331583  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:16:27.339149  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:16:27.347443  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:16:27.354941  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:16:27.358541  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:16:27.358644  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:16:27.401369  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:16:27.408555  337106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:16:27.412327  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:16:27.452918  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:16:27.493668  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:16:27.534423  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:16:27.575645  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:16:27.617601  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 20:16:27.658239  337106 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.35.0 crio true true} ...
	I1227 20:16:27.658389  337106 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422549-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:16:27.658424  337106 kube-vip.go:115] generating kube-vip config ...
	I1227 20:16:27.658480  337106 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 20:16:27.670482  337106 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:16:27.670542  337106 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1227 20:16:27.670611  337106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:16:27.678382  337106 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:16:27.678493  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1227 20:16:27.688057  337106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 20:16:27.702120  337106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:16:27.721182  337106 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
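Note: because "lsmod | grep ip_vs" exited non-zero above, kube-vip falls back from IPVS-based control-plane load balancing to the ARP/leader-election manifest shown, which is then written to /etc/kubernetes/manifests/kube-vip.yaml. A hedged sketch of how the missing modules could be checked and loaded on a host whose kernel ships them (module loading happens on the host kernel, so this may not be possible from inside the kic container):

    # Probe for the ipvs modules the same way the log does.
    sudo sh -c "lsmod | grep ip_vs" || echo "ip_vs modules not loaded"

    # Standard ipvs module names; loading them requires the host kernel to provide them.
    for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do
      sudo modprobe "$m" || echo "could not load $m"
    done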
	I1227 20:16:27.736629  337106 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 20:16:27.740129  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
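Note: the /bin/bash one-liner above is the hosts-file update that points control-plane.minikube.internal at the HA virtual IP. The same steps in readable form (VIP taken from this run):

    # Drop any stale control-plane.minikube.internal entry, append the current VIP,
    # and copy the rewritten file back over /etc/hosts.
    {
      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.49.254\tcontrol-plane.minikube.internal\n'
    } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts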
	I1227 20:16:27.750576  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:16:27.920085  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:16:27.936290  337106 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:16:27.936639  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:27.941595  337106 out.go:179] * Verifying Kubernetes components...
	I1227 20:16:27.944502  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:16:28.098929  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:16:28.115947  337106 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1227 20:16:28.116063  337106 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1227 20:16:28.116301  337106 node_ready.go:35] waiting up to 6m0s for node "ha-422549-m03" to be "Ready" ...
	W1227 20:16:30.121347  337106 node_ready.go:57] node "ha-422549-m03" has "Ready":"Unknown" status (will retry)
	W1227 20:16:32.620007  337106 node_ready.go:57] node "ha-422549-m03" has "Ready":"Unknown" status (will retry)
	W1227 20:16:34.620221  337106 node_ready.go:57] node "ha-422549-m03" has "Ready":"Unknown" status (will retry)
	W1227 20:16:36.620631  337106 node_ready.go:57] node "ha-422549-m03" has "Ready":"Unknown" status (will retry)
	W1227 20:16:38.620914  337106 node_ready.go:57] node "ha-422549-m03" has "Ready":"Unknown" status (will retry)
	W1227 20:16:41.119914  337106 node_ready.go:57] node "ha-422549-m03" has "Ready":"Unknown" status (will retry)
	I1227 20:16:42.138199  337106 node_ready.go:49] node "ha-422549-m03" is "Ready"
	I1227 20:16:42.138234  337106 node_ready.go:38] duration metric: took 14.021894093s for node "ha-422549-m03" to be "Ready" ...
	I1227 20:16:42.138250  337106 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:16:42.138320  337106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:16:42.201875  337106 api_server.go:72] duration metric: took 14.265538166s to wait for apiserver process to appear ...
	I1227 20:16:42.201905  337106 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:16:42.201928  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:42.211305  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1227 20:16:42.217811  337106 api_server.go:141] control plane version: v1.35.0
	I1227 20:16:42.217842  337106 api_server.go:131] duration metric: took 15.928834ms to wait for apiserver health ...
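Note: the healthz probe above hits the apiserver of the first control-plane node directly rather than the VIP. Roughly the same check can be reproduced from a shell (a sketch; the --cacert variant assumes the cluster CA copied to /var/lib/minikube/certs/ca.crt earlier in this run signs the apiserver's serving certificate, which is the default minikube layout):

    # Quick probe that skips TLS verification; prints "ok" when the apiserver is healthy.
    curl -sk https://192.168.49.2:8443/healthz
    # Stricter probe, validating the serving certificate against the cluster CA.
    curl -s --cacert /var/lib/minikube/certs/ca.crt https://192.168.49.2:8443/healthz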
	I1227 20:16:42.217852  337106 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:16:42.235518  337106 system_pods.go:59] 26 kube-system pods found
	I1227 20:16:42.235637  337106 system_pods.go:61] "coredns-7d764666f9-mf5xw" [5a7f58c2-f991-46f0-9ece-9a561d53d25f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:42.235688  337106 system_pods.go:61] "coredns-7d764666f9-n5d9d" [159febfd-c1e4-4897-a372-59e4a3069914] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:42.235725  337106 system_pods.go:61] "etcd-ha-422549" [8f26f563-e734-4add-aefe-484f0e873a1e] Running
	I1227 20:16:42.235747  337106 system_pods.go:61] "etcd-ha-422549-m02" [5fed7e48-07c4-4a07-b63b-0fccbd196f6f] Running
	I1227 20:16:42.235772  337106 system_pods.go:61] "etcd-ha-422549-m03" [d22f78a1-2f4c-41e6-b65a-bf7108686c71] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:16:42.235810  337106 system_pods.go:61] "kindnet-28svl" [1494f795-941f-418e-8090-098225eb9c6a] Running
	I1227 20:16:42.235843  337106 system_pods.go:61] "kindnet-4hl7v" [ea2cc8a1-df16-440c-a093-a5d915b249b4] Running
	I1227 20:16:42.235869  337106 system_pods.go:61] "kindnet-5wczs" [df3d7298-4140-464f-a6e8-c614e1683488] Running
	I1227 20:16:42.235899  337106 system_pods.go:61] "kindnet-qkqmv" [66d834ae-af1b-456d-ae48-8a0d6608f961] Running
	I1227 20:16:42.235929  337106 system_pods.go:61] "kube-apiserver-ha-422549" [14f8e794-2ba7-477d-806b-03dd5a33d868] Running
	I1227 20:16:42.235961  337106 system_pods.go:61] "kube-apiserver-ha-422549-m02" [a4b97cc6-26ef-4d46-9ef9-bdee08eb89d6] Running
	I1227 20:16:42.235997  337106 system_pods.go:61] "kube-apiserver-ha-422549-m03" [71f23288-3e33-4bc8-9182-08c190ae026f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:16:42.236045  337106 system_pods.go:61] "kube-controller-manager-ha-422549" [b69af60f-4eac-4e85-aa81-66b7616a46f6] Running
	I1227 20:16:42.236083  337106 system_pods.go:61] "kube-controller-manager-ha-422549-m02" [07c0e68f-76e5-4cee-92a2-05dd2fb4c3e2] Running
	I1227 20:16:42.236112  337106 system_pods.go:61] "kube-controller-manager-ha-422549-m03" [af291694-2986-455c-8588-c2879d10ff3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:16:42.236140  337106 system_pods.go:61] "kube-proxy-cg4z5" [42f74e61-eb67-4d02-8f08-f77f7163f5fc] Running
	I1227 20:16:42.236179  337106 system_pods.go:61] "kube-proxy-kscg6" [baa716d5-546a-4922-ba51-fe1116e36c75] Running
	I1227 20:16:42.236206  337106 system_pods.go:61] "kube-proxy-mhmmn" [d69029af-1fc4-4a31-913e-92e1231e845a] Running
	I1227 20:16:42.236231  337106 system_pods.go:61] "kube-proxy-nqr7h" [d0fc3ef5-765a-4376-94e6-42237908d3fd] Running
	I1227 20:16:42.236262  337106 system_pods.go:61] "kube-scheduler-ha-422549" [549e105d-d2e7-42b6-ae48-098d590e7b1d] Running
	I1227 20:16:42.236297  337106 system_pods.go:61] "kube-scheduler-ha-422549-m02" [db9187da-87a8-4b73-baea-76f3d9ef35c7] Running
	I1227 20:16:42.236326  337106 system_pods.go:61] "kube-scheduler-ha-422549-m03" [2a6b70b3-5303-404f-8b1d-1a65b9b81555] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:16:42.236352  337106 system_pods.go:61] "kube-vip-ha-422549" [32d647ce-90ed-4f56-b4c8-7ed445019d88] Running
	I1227 20:16:42.236391  337106 system_pods.go:61] "kube-vip-ha-422549-m02" [ddde9374-24b7-498d-b829-6902c612b272] Running
	I1227 20:16:42.236414  337106 system_pods.go:61] "kube-vip-ha-422549-m03" [39a60c56-1bf0-4232-9af0-f55e0c66a33d] Running
	I1227 20:16:42.236441  337106 system_pods.go:61] "storage-provisioner" [0d645eab-223f-4dd6-9518-6ab4a21d4c09] Running
	I1227 20:16:42.236483  337106 system_pods.go:74] duration metric: took 18.617239ms to wait for pod list to return data ...
	I1227 20:16:42.236522  337106 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:16:42.247926  337106 default_sa.go:45] found service account: "default"
	I1227 20:16:42.248004  337106 default_sa.go:55] duration metric: took 11.459641ms for default service account to be created ...
	I1227 20:16:42.248030  337106 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:16:42.261989  337106 system_pods.go:86] 26 kube-system pods found
	I1227 20:16:42.262126  337106 system_pods.go:89] "coredns-7d764666f9-mf5xw" [5a7f58c2-f991-46f0-9ece-9a561d53d25f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:42.262177  337106 system_pods.go:89] "coredns-7d764666f9-n5d9d" [159febfd-c1e4-4897-a372-59e4a3069914] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:42.262207  337106 system_pods.go:89] "etcd-ha-422549" [8f26f563-e734-4add-aefe-484f0e873a1e] Running
	I1227 20:16:42.262236  337106 system_pods.go:89] "etcd-ha-422549-m02" [5fed7e48-07c4-4a07-b63b-0fccbd196f6f] Running
	I1227 20:16:42.262283  337106 system_pods.go:89] "etcd-ha-422549-m03" [d22f78a1-2f4c-41e6-b65a-bf7108686c71] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:16:42.262312  337106 system_pods.go:89] "kindnet-28svl" [1494f795-941f-418e-8090-098225eb9c6a] Running
	I1227 20:16:42.262338  337106 system_pods.go:89] "kindnet-4hl7v" [ea2cc8a1-df16-440c-a093-a5d915b249b4] Running
	I1227 20:16:42.262359  337106 system_pods.go:89] "kindnet-5wczs" [df3d7298-4140-464f-a6e8-c614e1683488] Running
	I1227 20:16:42.262394  337106 system_pods.go:89] "kindnet-qkqmv" [66d834ae-af1b-456d-ae48-8a0d6608f961] Running
	I1227 20:16:42.262426  337106 system_pods.go:89] "kube-apiserver-ha-422549" [14f8e794-2ba7-477d-806b-03dd5a33d868] Running
	I1227 20:16:42.262449  337106 system_pods.go:89] "kube-apiserver-ha-422549-m02" [a4b97cc6-26ef-4d46-9ef9-bdee08eb89d6] Running
	I1227 20:16:42.262479  337106 system_pods.go:89] "kube-apiserver-ha-422549-m03" [71f23288-3e33-4bc8-9182-08c190ae026f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:16:42.262522  337106 system_pods.go:89] "kube-controller-manager-ha-422549" [b69af60f-4eac-4e85-aa81-66b7616a46f6] Running
	I1227 20:16:42.262568  337106 system_pods.go:89] "kube-controller-manager-ha-422549-m02" [07c0e68f-76e5-4cee-92a2-05dd2fb4c3e2] Running
	I1227 20:16:42.262604  337106 system_pods.go:89] "kube-controller-manager-ha-422549-m03" [af291694-2986-455c-8588-c2879d10ff3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:16:42.262654  337106 system_pods.go:89] "kube-proxy-cg4z5" [42f74e61-eb67-4d02-8f08-f77f7163f5fc] Running
	I1227 20:16:42.262691  337106 system_pods.go:89] "kube-proxy-kscg6" [baa716d5-546a-4922-ba51-fe1116e36c75] Running
	I1227 20:16:42.262719  337106 system_pods.go:89] "kube-proxy-mhmmn" [d69029af-1fc4-4a31-913e-92e1231e845a] Running
	I1227 20:16:42.262764  337106 system_pods.go:89] "kube-proxy-nqr7h" [d0fc3ef5-765a-4376-94e6-42237908d3fd] Running
	I1227 20:16:42.262793  337106 system_pods.go:89] "kube-scheduler-ha-422549" [549e105d-d2e7-42b6-ae48-098d590e7b1d] Running
	I1227 20:16:42.262821  337106 system_pods.go:89] "kube-scheduler-ha-422549-m02" [db9187da-87a8-4b73-baea-76f3d9ef35c7] Running
	I1227 20:16:42.262867  337106 system_pods.go:89] "kube-scheduler-ha-422549-m03" [2a6b70b3-5303-404f-8b1d-1a65b9b81555] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:16:42.262896  337106 system_pods.go:89] "kube-vip-ha-422549" [32d647ce-90ed-4f56-b4c8-7ed445019d88] Running
	I1227 20:16:42.262923  337106 system_pods.go:89] "kube-vip-ha-422549-m02" [ddde9374-24b7-498d-b829-6902c612b272] Running
	I1227 20:16:42.262973  337106 system_pods.go:89] "kube-vip-ha-422549-m03" [39a60c56-1bf0-4232-9af0-f55e0c66a33d] Running
	I1227 20:16:42.263009  337106 system_pods.go:89] "storage-provisioner" [0d645eab-223f-4dd6-9518-6ab4a21d4c09] Running
	I1227 20:16:42.263038  337106 system_pods.go:126] duration metric: took 14.987495ms to wait for k8s-apps to be running ...
	I1227 20:16:42.263064  337106 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:16:42.263186  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:16:42.329952  337106 system_svc.go:56] duration metric: took 66.879518ms WaitForService to wait for kubelet
	I1227 20:16:42.330045  337106 kubeadm.go:587] duration metric: took 14.393713186s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:16:42.330082  337106 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:16:42.334874  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:42.334956  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:42.334985  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:42.335008  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:42.335041  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:42.335069  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:42.335090  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:42.335112  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:42.335144  337106 node_conditions.go:105] duration metric: took 5.018461ms to run NodePressure ...
	I1227 20:16:42.335178  337106 start.go:242] waiting for startup goroutines ...
	I1227 20:16:42.335217  337106 start.go:256] writing updated cluster config ...
	I1227 20:16:42.338858  337106 out.go:203] 
	I1227 20:16:42.342208  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:42.342412  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:16:42.346339  337106 out.go:179] * Starting "ha-422549-m04" worker node in "ha-422549" cluster
	I1227 20:16:42.350180  337106 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:16:42.353431  337106 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:16:42.356594  337106 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:16:42.356748  337106 cache.go:65] Caching tarball of preloaded images
	I1227 20:16:42.356702  337106 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:16:42.357174  337106 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:16:42.357212  337106 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:16:42.357376  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:16:42.393103  337106 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:16:42.393129  337106 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:16:42.393143  337106 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:16:42.393176  337106 start.go:360] acquireMachinesLock for ha-422549-m04: {Name:mk6b025464d8c3992b9046b379a06dcb477a1541 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:16:42.393245  337106 start.go:364] duration metric: took 45.324µs to acquireMachinesLock for "ha-422549-m04"
	I1227 20:16:42.393264  337106 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:16:42.393270  337106 fix.go:54] fixHost starting: m04
	I1227 20:16:42.393757  337106 cli_runner.go:164] Run: docker container inspect ha-422549-m04 --format={{.State.Status}}
	I1227 20:16:42.411553  337106 fix.go:112] recreateIfNeeded on ha-422549-m04: state=Stopped err=<nil>
	W1227 20:16:42.411578  337106 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:16:42.414835  337106 out.go:252] * Restarting existing docker container for "ha-422549-m04" ...
	I1227 20:16:42.414929  337106 cli_runner.go:164] Run: docker start ha-422549-m04
	I1227 20:16:42.767967  337106 cli_runner.go:164] Run: docker container inspect ha-422549-m04 --format={{.State.Status}}
	I1227 20:16:42.792044  337106 kic.go:430] container "ha-422549-m04" state is running.
	I1227 20:16:42.792404  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m04
	I1227 20:16:42.827351  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:16:42.827599  337106 machine.go:94] provisionDockerMachine start ...
	I1227 20:16:42.827669  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:42.865289  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:42.865636  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 20:16:42.865647  337106 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:16:42.866300  337106 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43686->127.0.0.1:33198: read: connection reset by peer
	I1227 20:16:46.033368  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m04
	
	I1227 20:16:46.033393  337106 ubuntu.go:182] provisioning hostname "ha-422549-m04"
	I1227 20:16:46.033521  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:46.061318  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:46.061712  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 20:16:46.061729  337106 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-422549-m04 && echo "ha-422549-m04" | sudo tee /etc/hostname
	I1227 20:16:46.247170  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m04
	
	I1227 20:16:46.247258  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:46.267833  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:46.268212  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 20:16:46.268238  337106 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422549-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422549-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422549-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:16:46.421793  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:16:46.421817  337106 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:16:46.421834  337106 ubuntu.go:190] setting up certificates
	I1227 20:16:46.421844  337106 provision.go:84] configureAuth start
	I1227 20:16:46.421907  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m04
	I1227 20:16:46.450717  337106 provision.go:143] copyHostCerts
	I1227 20:16:46.450775  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:16:46.450808  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:16:46.450827  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:16:46.450912  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:16:46.450998  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:16:46.451024  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:16:46.451029  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:16:46.451060  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:16:46.451106  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:16:46.451128  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:16:46.451133  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:16:46.451165  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:16:46.451217  337106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.ha-422549-m04 san=[127.0.0.1 192.168.49.5 ha-422549-m04 localhost minikube]
	I1227 20:16:46.849291  337106 provision.go:177] copyRemoteCerts
	I1227 20:16:46.849383  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:16:46.849466  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:46.871414  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m04/id_rsa Username:docker}
	I1227 20:16:46.969387  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:16:46.969501  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:16:46.998452  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:16:46.998518  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 20:16:47.021097  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:16:47.021160  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 20:16:47.040293  337106 provision.go:87] duration metric: took 618.436373ms to configureAuth
	I1227 20:16:47.040318  337106 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:16:47.040553  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:47.040650  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:47.060413  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:47.060713  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 20:16:47.060726  337106 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:16:47.416575  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:16:47.416595  337106 machine.go:97] duration metric: took 4.588981536s to provisionDockerMachine
	I1227 20:16:47.416607  337106 start.go:293] postStartSetup for "ha-422549-m04" (driver="docker")
	I1227 20:16:47.416618  337106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:16:47.416709  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:16:47.416753  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:47.436074  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m04/id_rsa Username:docker}
	I1227 20:16:47.541369  337106 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:16:47.545584  337106 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:16:47.545615  337106 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:16:47.545627  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:16:47.545689  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:16:47.545788  337106 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:16:47.545802  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:16:47.545901  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:16:47.553680  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:16:47.574171  337106 start.go:296] duration metric: took 157.548886ms for postStartSetup
	I1227 20:16:47.574295  337106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:16:47.574343  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:47.591734  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m04/id_rsa Username:docker}
	I1227 20:16:47.691874  337106 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:16:47.696839  337106 fix.go:56] duration metric: took 5.303562652s for fixHost
	I1227 20:16:47.696874  337106 start.go:83] releasing machines lock for "ha-422549-m04", held for 5.303620217s
	I1227 20:16:47.696941  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m04
	I1227 20:16:47.722974  337106 out.go:179] * Found network options:
	I1227 20:16:47.725907  337106 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1227 20:16:47.728701  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:47.728735  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:47.728747  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:47.728789  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:47.728805  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:47.728815  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	I1227 20:16:47.728903  337106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:16:47.728946  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:47.729221  337106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:16:47.729281  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:47.750771  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m04/id_rsa Username:docker}
	I1227 20:16:47.772821  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m04/id_rsa Username:docker}
	I1227 20:16:47.915331  337106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:16:47.990713  337106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:16:47.990795  337106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:16:48.000448  337106 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:16:48.000481  337106 start.go:496] detecting cgroup driver to use...
	I1227 20:16:48.000514  337106 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:16:48.000573  337106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:16:48.021384  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:16:48.039922  337106 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:16:48.040026  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:16:48.062813  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:16:48.079604  337106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:16:48.252416  337106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:16:48.379968  337106 docker.go:234] disabling docker service ...
	I1227 20:16:48.380079  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:16:48.396866  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:16:48.412804  337106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:16:48.580976  337106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:16:48.708477  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:16:48.723957  337106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:16:48.740271  337106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:16:48.740353  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.751954  337106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:16:48.752031  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.770376  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.788562  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.800161  337106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:16:48.809833  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.820365  337106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.838111  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.851461  337106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:16:48.859082  337106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:16:48.867125  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:16:49.040301  337106 ssh_runner.go:195] Run: sudo systemctl restart crio
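Note: the sequence above points crictl at the CRI-O socket and rewrites /etc/crio/crio.conf.d/02-crio.conf in place before restarting the runtime. Consolidated into a readable sketch of the same settings applied to this node (the default_sysctls edits for unprivileged ports are left out for brevity):

    # Tell crictl where CRI-O listens.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    # Pin the pause image and switch CRI-O to the cgroupfs cgroup manager, with conmon in the pod cgroup.
    conf=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
    sudo sed -i '/conmon_cgroup = .*/d' "$conf"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"

    # Enable IPv4 forwarding and restart CRI-O so the new config takes effect.
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload
    sudo systemctl restart crio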
	I1227 20:16:49.267978  337106 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:16:49.268078  337106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:16:49.275575  337106 start.go:574] Will wait 60s for crictl version
	I1227 20:16:49.275679  337106 ssh_runner.go:195] Run: which crictl
	I1227 20:16:49.281419  337106 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:16:49.315494  337106 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:16:49.315644  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:16:49.369281  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:16:49.404637  337106 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:16:49.407552  337106 out.go:179]   - env NO_PROXY=192.168.49.2
	I1227 20:16:49.411293  337106 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1227 20:16:49.414211  337106 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1227 20:16:49.417170  337106 cli_runner.go:164] Run: docker network inspect ha-422549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:16:49.439158  337106 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 20:16:49.443392  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:16:49.460241  337106 mustload.go:66] Loading cluster: ha-422549
	I1227 20:16:49.460498  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:49.460747  337106 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:16:49.491043  337106 host.go:66] Checking if "ha-422549" exists ...
	I1227 20:16:49.491329  337106 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549 for IP: 192.168.49.5
	I1227 20:16:49.491337  337106 certs.go:195] generating shared ca certs ...
	I1227 20:16:49.491350  337106 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:16:49.491459  337106 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:16:49.491497  337106 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:16:49.491508  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:16:49.491519  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:16:49.491530  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:16:49.491540  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:16:49.491593  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:16:49.491624  337106 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:16:49.491632  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:16:49.491659  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:16:49.491683  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:16:49.491705  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:16:49.491748  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:16:49.491776  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:16:49.491789  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:49.491812  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:16:49.491829  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:16:49.515784  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:16:49.544429  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:16:49.565837  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:16:49.591774  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:16:49.613222  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:16:49.642392  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:16:49.671654  337106 ssh_runner.go:195] Run: openssl version
	I1227 20:16:49.680550  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:16:49.689578  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:16:49.699039  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:16:49.704553  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:16:49.704616  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:16:49.749850  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:16:49.758256  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:49.766307  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:16:49.776970  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:49.780927  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:49.781029  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:49.822773  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:16:49.830459  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:16:49.838202  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:16:49.847286  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:16:49.851257  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:16:49.851323  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:16:49.895472  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
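Note: the repeated test/ln/openssl runs above install each CA into the node's OpenSSL trust layout: a named symlink under /etc/ssl/certs plus a <subject-hash>.0 link that OpenSSL's hash-based lookup uses (in this run the hash links already exist, so only the test -L checks appear). A condensed sketch of the same idea for one certificate:

    crt=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$crt" /etc/ssl/certs/minikubeCA.pem
    # OpenSSL locates CAs by subject-name hash, so a <hash>.0 symlink is needed as well.
    hash=$(openssl x509 -hash -noout -in "$crt")
    sudo ln -fs "$crt" "/etc/ssl/certs/${hash}.0"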
	I1227 20:16:49.903822  337106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:16:49.907501  337106 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 20:16:49.907548  337106 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.35.0 crio false true} ...
	I1227 20:16:49.907686  337106 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422549-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:16:49.907776  337106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:16:49.915527  337106 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:16:49.915638  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1227 20:16:49.923067  337106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 20:16:49.936470  337106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:16:49.951403  337106 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 20:16:49.955422  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:16:49.965541  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:16:50.111024  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:16:50.130778  337106 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1227 20:16:50.131217  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:50.136553  337106 out.go:179] * Verifying Kubernetes components...
	I1227 20:16:50.139597  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:16:50.312113  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:16:50.327943  337106 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1227 20:16:50.328030  337106 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1227 20:16:50.328306  337106 node_ready.go:35] waiting up to 6m0s for node "ha-422549-m04" to be "Ready" ...
	I1227 20:16:51.834080  337106 node_ready.go:49] node "ha-422549-m04" is "Ready"
	I1227 20:16:51.834112  337106 node_ready.go:38] duration metric: took 1.505787179s for node "ha-422549-m04" to be "Ready" ...
	I1227 20:16:51.834136  337106 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:16:51.834194  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:16:51.847783  337106 system_svc.go:56] duration metric: took 13.639755ms WaitForService to wait for kubelet
	I1227 20:16:51.847815  337106 kubeadm.go:587] duration metric: took 1.71699582s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:16:51.847835  337106 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:16:51.851110  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:51.851141  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:51.851154  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:51.851159  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:51.851164  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:51.851171  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:51.851174  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:51.851178  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:51.851184  337106 node_conditions.go:105] duration metric: took 3.342441ms to run NodePressure ...
	I1227 20:16:51.851198  337106 start.go:242] waiting for startup goroutines ...
	I1227 20:16:51.851223  337106 start.go:256] writing updated cluster config ...
	I1227 20:16:51.851550  337106 ssh_runner.go:195] Run: rm -f paused
	I1227 20:16:51.855763  337106 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:16:51.856293  337106 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
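Note: the remaining waits poll each control-plane pod through the client-go config dumped above. Roughly the same readiness check can be reproduced with kubectl (a sketch; it assumes kubectl is pointed at this cluster's kubeconfig and uses the label selectors listed in the log):

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=4m
    done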
	I1227 20:16:51.875834  337106 pod_ready.go:83] waiting for pod "coredns-7d764666f9-mf5xw" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 20:16:53.883849  337106 pod_ready.go:104] pod "coredns-7d764666f9-mf5xw" is not "Ready", error: <nil>
	W1227 20:16:56.461572  337106 pod_ready.go:104] pod "coredns-7d764666f9-mf5xw" is not "Ready", error: <nil>
	I1227 20:16:56.881855  337106 pod_ready.go:94] pod "coredns-7d764666f9-mf5xw" is "Ready"
	I1227 20:16:56.881886  337106 pod_ready.go:86] duration metric: took 5.006014091s for pod "coredns-7d764666f9-mf5xw" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.881896  337106 pod_ready.go:83] waiting for pod "coredns-7d764666f9-n5d9d" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.887788  337106 pod_ready.go:94] pod "coredns-7d764666f9-n5d9d" is "Ready"
	I1227 20:16:56.887818  337106 pod_ready.go:86] duration metric: took 5.91483ms for pod "coredns-7d764666f9-n5d9d" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.891258  337106 pod_ready.go:83] waiting for pod "etcd-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.898397  337106 pod_ready.go:94] pod "etcd-ha-422549" is "Ready"
	I1227 20:16:56.898437  337106 pod_ready.go:86] duration metric: took 7.137144ms for pod "etcd-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.898449  337106 pod_ready.go:83] waiting for pod "etcd-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.906314  337106 pod_ready.go:94] pod "etcd-ha-422549-m02" is "Ready"
	I1227 20:16:56.906341  337106 pod_ready.go:86] duration metric: took 7.885849ms for pod "etcd-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.906352  337106 pod_ready.go:83] waiting for pod "etcd-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:57.076308  337106 request.go:683] "Waited before sending request" delay="167.221744ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m03"
	I1227 20:16:57.080536  337106 pod_ready.go:94] pod "etcd-ha-422549-m03" is "Ready"
	I1227 20:16:57.080564  337106 pod_ready.go:86] duration metric: took 174.205244ms for pod "etcd-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:57.276888  337106 request.go:683] "Waited before sending request" delay="196.187905ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1227 20:16:57.280390  337106 pod_ready.go:83] waiting for pod "kube-apiserver-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:57.476826  337106 request.go:683] "Waited before sending request" delay="196.340204ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-422549"
	I1227 20:16:57.677055  337106 request.go:683] "Waited before sending request" delay="195.372363ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549"
	I1227 20:16:57.680148  337106 pod_ready.go:94] pod "kube-apiserver-ha-422549" is "Ready"
	I1227 20:16:57.680173  337106 pod_ready.go:86] duration metric: took 399.753981ms for pod "kube-apiserver-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:57.680183  337106 pod_ready.go:83] waiting for pod "kube-apiserver-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:57.876636  337106 request.go:683] "Waited before sending request" delay="196.366115ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-422549-m02"
	I1227 20:16:58.076883  337106 request.go:683] "Waited before sending request" delay="195.240889ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m02"
	I1227 20:16:58.081595  337106 pod_ready.go:94] pod "kube-apiserver-ha-422549-m02" is "Ready"
	I1227 20:16:58.081624  337106 pod_ready.go:86] duration metric: took 401.434113ms for pod "kube-apiserver-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:58.081636  337106 pod_ready.go:83] waiting for pod "kube-apiserver-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:58.277078  337106 request.go:683] "Waited before sending request" delay="195.329053ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-422549-m03"
	I1227 20:16:58.476156  337106 request.go:683] "Waited before sending request" delay="193.265737ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m03"
	I1227 20:16:58.479583  337106 pod_ready.go:94] pod "kube-apiserver-ha-422549-m03" is "Ready"
	I1227 20:16:58.479609  337106 pod_ready.go:86] duration metric: took 397.939042ms for pod "kube-apiserver-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:58.677038  337106 request.go:683] "Waited before sending request" delay="197.311256ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1227 20:16:58.680893  337106 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:58.876237  337106 request.go:683] "Waited before sending request" delay="195.249704ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-422549"
	I1227 20:16:59.076160  337106 request.go:683] "Waited before sending request" delay="194.26927ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549"
	I1227 20:16:59.079502  337106 pod_ready.go:94] pod "kube-controller-manager-ha-422549" is "Ready"
	I1227 20:16:59.079531  337106 pod_ready.go:86] duration metric: took 398.612222ms for pod "kube-controller-manager-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:59.079542  337106 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:59.276926  337106 request.go:683] "Waited before sending request" delay="197.310947ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-422549-m02"
	I1227 20:16:59.476987  337106 request.go:683] "Waited before sending request" delay="195.346795ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m02"
	I1227 20:16:59.480256  337106 pod_ready.go:94] pod "kube-controller-manager-ha-422549-m02" is "Ready"
	I1227 20:16:59.480288  337106 pod_ready.go:86] duration metric: took 400.738794ms for pod "kube-controller-manager-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:59.480298  337106 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:59.676709  337106 request.go:683] "Waited before sending request" delay="196.313782ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-422549-m03"
	I1227 20:16:59.876936  337106 request.go:683] "Waited before sending request" delay="194.422474ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m03"
	I1227 20:16:59.880871  337106 pod_ready.go:94] pod "kube-controller-manager-ha-422549-m03" is "Ready"
	I1227 20:16:59.880898  337106 pod_ready.go:86] duration metric: took 400.592723ms for pod "kube-controller-manager-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:00.077121  337106 request.go:683] "Waited before sending request" delay="196.103919ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1227 20:17:00.089664  337106 pod_ready.go:83] waiting for pod "kube-proxy-cg4z5" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:00.277067  337106 request.go:683] "Waited before sending request" delay="187.22976ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cg4z5"
	I1227 20:17:00.476439  337106 request.go:683] "Waited before sending request" delay="191.18971ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m03"
	I1227 20:17:00.480835  337106 pod_ready.go:94] pod "kube-proxy-cg4z5" is "Ready"
	I1227 20:17:00.480892  337106 pod_ready.go:86] duration metric: took 391.133363ms for pod "kube-proxy-cg4z5" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:00.480907  337106 pod_ready.go:83] waiting for pod "kube-proxy-kscg6" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:00.676146  337106 request.go:683] "Waited before sending request" delay="195.116873ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kscg6"
	I1227 20:17:00.876152  337106 request.go:683] "Waited before sending request" delay="192.262917ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m04"
	I1227 20:17:00.881008  337106 pod_ready.go:94] pod "kube-proxy-kscg6" is "Ready"
	I1227 20:17:00.881038  337106 pod_ready.go:86] duration metric: took 400.122065ms for pod "kube-proxy-kscg6" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:00.881048  337106 pod_ready.go:83] waiting for pod "kube-proxy-mhmmn" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:01.076325  337106 request.go:683] "Waited before sending request" delay="195.195166ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mhmmn"
	I1227 20:17:01.276909  337106 request.go:683] "Waited before sending request" delay="195.293101ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549"
	I1227 20:17:01.280680  337106 pod_ready.go:94] pod "kube-proxy-mhmmn" is "Ready"
	I1227 20:17:01.280710  337106 pod_ready.go:86] duration metric: took 399.654071ms for pod "kube-proxy-mhmmn" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:01.280722  337106 pod_ready.go:83] waiting for pod "kube-proxy-nqr7h" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:01.476964  337106 request.go:683] "Waited before sending request" delay="196.12986ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqr7h"
	I1227 20:17:01.676540  337106 request.go:683] "Waited before sending request" delay="192.49818ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m02"
	I1227 20:17:01.685668  337106 pod_ready.go:94] pod "kube-proxy-nqr7h" is "Ready"
	I1227 20:17:01.685702  337106 pod_ready.go:86] duration metric: took 404.972449ms for pod "kube-proxy-nqr7h" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:01.876169  337106 request.go:683] "Waited before sending request" delay="190.319322ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1227 20:17:01.882184  337106 pod_ready.go:83] waiting for pod "kube-scheduler-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:02.076778  337106 request.go:683] "Waited before sending request" delay="194.39653ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-422549"
	I1227 20:17:02.277097  337106 request.go:683] "Waited before sending request" delay="189.264505ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549"
	I1227 20:17:02.281682  337106 pod_ready.go:94] pod "kube-scheduler-ha-422549" is "Ready"
	I1227 20:17:02.281718  337106 pod_ready.go:86] duration metric: took 399.422109ms for pod "kube-scheduler-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:02.281728  337106 pod_ready.go:83] waiting for pod "kube-scheduler-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:02.477021  337106 request.go:683] "Waited before sending request" delay="195.180295ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-422549-m02"
	I1227 20:17:02.676336  337106 request.go:683] "Waited before sending request" delay="193.224619ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m02"
	I1227 20:17:02.680037  337106 pod_ready.go:94] pod "kube-scheduler-ha-422549-m02" is "Ready"
	I1227 20:17:02.680112  337106 pod_ready.go:86] duration metric: took 398.375125ms for pod "kube-scheduler-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:02.680126  337106 pod_ready.go:83] waiting for pod "kube-scheduler-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:02.876405  337106 request.go:683] "Waited before sending request" delay="196.195019ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-422549-m03"
	I1227 20:17:03.076174  337106 request.go:683] "Waited before sending request" delay="195.233596ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m03"
	I1227 20:17:03.079768  337106 pod_ready.go:94] pod "kube-scheduler-ha-422549-m03" is "Ready"
	I1227 20:17:03.079800  337106 pod_ready.go:86] duration metric: took 399.666897ms for pod "kube-scheduler-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:03.079847  337106 pod_ready.go:40] duration metric: took 11.224018864s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:17:03.152145  337106 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 20:17:03.155161  337106 out.go:203] 
	W1227 20:17:03.158240  337106 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 20:17:03.161317  337106 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 20:17:03.164544  337106 out.go:179] * Done! kubectl is now configured to use "ha-422549" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 27 20:16:14 ha-422549 crio[669]: time="2025-12-27T20:16:14.963662144Z" level=info msg="Started container" PID=1165 containerID=e30e2fc201d45a408198fe1cf19728fccd5ebe17d0f5255f7589564c690889ec description=kube-system/kube-proxy-mhmmn/kube-proxy id=83f9017b-13c2-4c2b-927f-e22b6986096d name=/runtime.v1.RuntimeService/StartContainer sandboxID=6495c9a31e01c2f5ac17768f9f5e13a5423c5594fc2867804e3bb0a908221252
	Dec 27 20:16:45 ha-422549 conmon[1143]: conmon 7acd50dc5298fb99db44 <ninfo>: container 1152 exited with status 1
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.428315945Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f60cbd10-f7b2-4cd1-80a7-fccba0550911 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.43511179Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=994ad400-2597-4615-b648-cdef116922a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.438853907Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=52ecd850-72c2-4d8c-abb4-bcb68b155882 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.438953761Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.446454815Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.447683161Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/be0e461cabdcf17f5b8d1bb2222c3a204fd930be36abbb0859da36ab3d16462f/merged/etc/passwd: no such file or directory"
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.447776861Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/be0e461cabdcf17f5b8d1bb2222c3a204fd930be36abbb0859da36ab3d16462f/merged/etc/group: no such file or directory"
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.448117445Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.466884564Z" level=info msg="Created container 7361d14a41eae128627f7ec4143721dd6bb4d3ae719e332d08bda13887aca146: kube-system/storage-provisioner/storage-provisioner" id=52ecd850-72c2-4d8c-abb4-bcb68b155882 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.472967068Z" level=info msg="Starting container: 7361d14a41eae128627f7ec4143721dd6bb4d3ae719e332d08bda13887aca146" id=34f02e50-7595-4b71-82ea-dc48fe422b8c name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.475650188Z" level=info msg="Started container" PID=1422 containerID=7361d14a41eae128627f7ec4143721dd6bb4d3ae719e332d08bda13887aca146 description=kube-system/storage-provisioner/storage-provisioner id=34f02e50-7595-4b71-82ea-dc48fe422b8c name=/runtime.v1.RuntimeService/StartContainer sandboxID=735879ad1c236176f8b5399b57a79b6c0ab6195af5a05ee38eac2aa69480249f
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.268998026Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.274112141Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.274149957Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.274171495Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.277419129Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.277535811Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.27759697Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.281296488Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.281332581Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.281356277Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.285112877Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.28514943Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	7361d14a41eae       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   24 seconds ago       Running             storage-provisioner       4                   735879ad1c236       storage-provisioner                 kube-system
	7879d1a6c6a98       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf   55 seconds ago       Running             coredns                   2                   bd06f2852a595       coredns-7d764666f9-mf5xw            kube-system
	0fb071b8bd6b6       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   55 seconds ago       Running             busybox                   2                   cf93f418a9a0a       busybox-769dd8b7dd-k7ks6            default
	7acd50dc5298f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   55 seconds ago       Exited              storage-provisioner       3                   735879ad1c236       storage-provisioner                 kube-system
	e30e2fc201d45       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   55 seconds ago       Running             kube-proxy                2                   6495c9a31e01c       kube-proxy-mhmmn                    kube-system
	595cf90732ea1       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf   55 seconds ago       Running             coredns                   2                   6e45d9e1ac155       coredns-7d764666f9-n5d9d            kube-system
	f4b4244b1db16       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13   55 seconds ago       Running             kindnet-cni               2                   828118b404202       kindnet-qkqmv                       kube-system
	8a1b0b47a0ed1       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   55 seconds ago       Running             kube-controller-manager   7                   75a2af3dd93e9       kube-controller-manager-ha-422549   kube-system
	acdd287d4087f       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   About a minute ago   Running             kube-scheduler            2                   ee19621eddf01       kube-scheduler-ha-422549            kube-system
	7c4ac1dbe59ad       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   About a minute ago   Exited              kube-controller-manager   6                   75a2af3dd93e9       kube-controller-manager-ha-422549   kube-system
	6b0b91d1da0a4       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   About a minute ago   Running             kube-apiserver            3                   025c49d6ec070       kube-apiserver-ha-422549            kube-system
	776b31832bd3b       28c5662932f6032ee4faba083d9c2af90232797e1d4f89d9892cb92b26fec299   About a minute ago   Running             kube-vip                  1                   66af5fba1f89e       kube-vip-ha-422549                  kube-system
	97ce57129ce3b       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   About a minute ago   Running             etcd                      2                   77b191af13e7e       etcd-ha-422549                      kube-system
	
	
	==> coredns [595cf90732ea108872ec4fb5764679f01619c8baa8a4aca8307dd9cb64a9120f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:35202 - 54427 "HINFO IN 8582221969168170305.1983723465531701443. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.038347152s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	
	
	==> coredns [7879d1a6c6a98b3b227de2b37ae12cd1a3492d804d3ec108fe982379de5ffd0c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:46822 - 1915 "HINFO IN 1020865313171851806.989409873494633985. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013088569s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-422549
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_03_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:03:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:17:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:16:35 +0000   Sat, 27 Dec 2025 20:03:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:16:35 +0000   Sat, 27 Dec 2025 20:03:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:16:35 +0000   Sat, 27 Dec 2025 20:03:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:16:35 +0000   Sat, 27 Dec 2025 20:09:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-422549
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                acd356f3-8732-454f-9ea5-4ebb90b80a04
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-769dd8b7dd-k7ks6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7d764666f9-mf5xw             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 coredns-7d764666f9-n5d9d             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-ha-422549                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-qkqmv                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-422549             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-422549    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-mhmmn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-422549             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-422549                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  13m    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  13m    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  12m    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  10m    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  7m21s  node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  53s    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  52s    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  27s    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	
	
	Name:               ha-422549-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_27T20_04_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:04:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:17:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:17:04 +0000   Sat, 27 Dec 2025 20:16:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:17:04 +0000   Sat, 27 Dec 2025 20:16:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:17:04 +0000   Sat, 27 Dec 2025 20:16:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:17:04 +0000   Sat, 27 Dec 2025 20:16:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-422549-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                279e934d-6d34-4a11-83f0-a7f36011d6a2
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-769dd8b7dd-v6vks                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-422549-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-5wczs                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-422549-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-422549-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-nqr7h                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-422549-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-422549-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  13m    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  13m    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  12m    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  10m    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  7m21s  node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  NodeNotReady    6m31s  node-controller  Node ha-422549-m02 status is now: NodeNotReady
	  Normal  RegisteredNode  53s    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  52s    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  27s    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	
	
	Name:               ha-422549-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_27T20_04_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:04:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:17:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:16:41 +0000   Sat, 27 Dec 2025 20:16:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:16:41 +0000   Sat, 27 Dec 2025 20:16:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:16:41 +0000   Sat, 27 Dec 2025 20:16:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:16:41 +0000   Sat, 27 Dec 2025 20:16:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-422549-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                dd826b6d-21ec-45c4-b392-2d4b9b2daddb
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-769dd8b7dd-qcz4b                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-422549-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-28svl                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-422549-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-422549-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-cg4z5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-422549-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-422549-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  12m    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  12m    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  12m    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  10m    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  7m21s  node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  NodeNotReady    6m31s  node-controller  Node ha-422549-m03 status is now: NodeNotReady
	  Normal  RegisteredNode  53s    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  52s    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  27s    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	
	
	Name:               ha-422549-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_27T20_05_33_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:05:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:17:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:16:51 +0000   Sat, 27 Dec 2025 20:16:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:16:51 +0000   Sat, 27 Dec 2025 20:16:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:16:51 +0000   Sat, 27 Dec 2025 20:16:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:16:51 +0000   Sat, 27 Dec 2025 20:16:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-422549-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                45c0e480-898e-46d5-83ce-c457d7b4b021
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4hl7v       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-proxy-kscg6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  10m    node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  7m21s  node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  NodeNotReady    6m31s  node-controller  Node ha-422549-m04 status is now: NodeNotReady
	  Normal  RegisteredNode  53s    node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  52s    node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  27s    node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	
	
	==> dmesg <==
	[Dec27 19:27] overlayfs: idmapped layers are currently not supported
	[Dec27 19:28] overlayfs: idmapped layers are currently not supported
	[ +28.388596] overlayfs: idmapped layers are currently not supported
	[Dec27 19:29] overlayfs: idmapped layers are currently not supported
	[  +9.242530] overlayfs: idmapped layers are currently not supported
	[Dec27 19:30] overlayfs: idmapped layers are currently not supported
	[ +11.577339] overlayfs: idmapped layers are currently not supported
	[Dec27 19:32] overlayfs: idmapped layers are currently not supported
	[ +19.186532] overlayfs: idmapped layers are currently not supported
	[Dec27 19:34] overlayfs: idmapped layers are currently not supported
	[Dec27 19:54] kauditd_printk_skb: 8 callbacks suppressed
	[Dec27 19:56] overlayfs: idmapped layers are currently not supported
	[Dec27 19:59] overlayfs: idmapped layers are currently not supported
	[Dec27 20:00] overlayfs: idmapped layers are currently not supported
	[Dec27 20:03] overlayfs: idmapped layers are currently not supported
	[ +31.019083] overlayfs: idmapped layers are currently not supported
	[Dec27 20:04] overlayfs: idmapped layers are currently not supported
	[Dec27 20:05] overlayfs: idmapped layers are currently not supported
	[Dec27 20:06] overlayfs: idmapped layers are currently not supported
	[Dec27 20:07] overlayfs: idmapped layers are currently not supported
	[  +3.687478] overlayfs: idmapped layers are currently not supported
	[Dec27 20:15] overlayfs: idmapped layers are currently not supported
	[  +3.163851] overlayfs: idmapped layers are currently not supported
	[Dec27 20:16] overlayfs: idmapped layers are currently not supported
	[ +35.129102] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [97ce57129ce3bc803fd62d49e1f3f06d06aa64d93e2ef36f372084cbbd21e34a] <==
	{"level":"warn","ts":"2025-12-27T20:16:25.166605Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc","error":"EOF"}
	{"level":"warn","ts":"2025-12-27T20:16:25.198331Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"1cbc45fdb1f38dc","error":"failed to dial 1cbc45fdb1f38dc on stream Message (EOF)"}
	{"level":"warn","ts":"2025-12-27T20:16:25.227922Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1cbc45fdb1f38dc","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T20:16:25.227903Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1cbc45fdb1f38dc","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T20:16:25.343755Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc"}
	{"level":"warn","ts":"2025-12-27T20:16:25.769831Z","caller":"etcdserver/cluster_util.go:261","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"1cbc45fdb1f38dc","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T20:16:25.769943Z","caller":"etcdserver/cluster_util.go:162","msg":"failed to get version","remote-member-id":"1cbc45fdb1f38dc","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T20:16:28.551578Z","caller":"rafthttp/stream.go:193","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc"}
	{"level":"warn","ts":"2025-12-27T20:16:29.772372Z","caller":"etcdserver/cluster_util.go:261","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"1cbc45fdb1f38dc","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T20:16:29.772423Z","caller":"etcdserver/cluster_util.go:162","msg":"failed to get version","remote-member-id":"1cbc45fdb1f38dc","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T20:16:30.231891Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1cbc45fdb1f38dc","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T20:16:30.231953Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1cbc45fdb1f38dc","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T20:16:33.773388Z","caller":"etcdserver/cluster_util.go:261","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"1cbc45fdb1f38dc","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T20:16:33.773438Z","caller":"etcdserver/cluster_util.go:162","msg":"failed to get version","remote-member-id":"1cbc45fdb1f38dc","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T20:16:35.232069Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1cbc45fdb1f38dc","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T20:16:35.232083Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1cbc45fdb1f38dc","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2025-12-27T20:16:37.257347Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"1cbc45fdb1f38dc","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-12-27T20:16:37.257386Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"1cbc45fdb1f38dc"}
	{"level":"info","ts":"2025-12-27T20:16:37.257399Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc"}
	{"level":"info","ts":"2025-12-27T20:16:37.267581Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"1cbc45fdb1f38dc","stream-type":"stream Message"}
	{"level":"info","ts":"2025-12-27T20:16:37.267620Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc"}
	{"level":"info","ts":"2025-12-27T20:16:37.295096Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc"}
	{"level":"info","ts":"2025-12-27T20:16:37.295396Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1cbc45fdb1f38dc"}
	{"level":"warn","ts":"2025-12-27T20:17:06.414126Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"199.990934ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" limit:500 ","response":"range_response_count:500 size:371820"}
	{"level":"info","ts":"2025-12-27T20:17:06.414197Z","caller":"traceutil/trace.go:172","msg":"trace[128322948] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:500; response_revision:2982; }","duration":"200.078275ms","start":"2025-12-27T20:17:06.214106Z","end":"2025-12-27T20:17:06.414184Z","steps":["trace[128322948] 'range keys from bolt db'  (duration: 198.878572ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:17:10 up  1:59,  0 user,  load average: 1.71, 1.25, 1.37
	Linux ha-422549 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f4b4244b1db16ca451154424e89d4d56ce2b826c6f69b1c1fa82f892e7966881] <==
	E1227 20:16:45.273950       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1227 20:16:45.285766       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1227 20:16:45.285845       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1227 20:16:46.769029       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:16:46.769154       1 metrics.go:72] Registering metrics
	I1227 20:16:46.769261       1 controller.go:711] "Syncing nftables rules"
	I1227 20:16:55.268126       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1227 20:16:55.268228       1 main.go:324] Node ha-422549-m03 has CIDR [10.244.2.0/24] 
	I1227 20:16:55.268426       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.49.4 Flags: [] Table: 0 Realm: 0} 
	I1227 20:16:55.268521       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1227 20:16:55.268535       1 main.go:324] Node ha-422549-m04 has CIDR [10.244.3.0/24] 
	I1227 20:16:55.268588       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0 Realm: 0} 
	I1227 20:16:55.268639       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 20:16:55.268652       1 main.go:301] handling current node
	I1227 20:16:55.274378       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1227 20:16:55.277916       1 main.go:324] Node ha-422549-m02 has CIDR [10.244.1.0/24] 
	I1227 20:16:55.278084       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.49.3 Flags: [] Table: 0 Realm: 0} 
	I1227 20:17:05.268989       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 20:17:05.269024       1 main.go:301] handling current node
	I1227 20:17:05.269041       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1227 20:17:05.269047       1 main.go:324] Node ha-422549-m02 has CIDR [10.244.1.0/24] 
	I1227 20:17:05.269196       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1227 20:17:05.269272       1 main.go:324] Node ha-422549-m03 has CIDR [10.244.2.0/24] 
	I1227 20:17:05.269415       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1227 20:17:05.269497       1 main.go:324] Node ha-422549-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [6b0b91d1da0a4c385d0d3110ebc1d18efbc54bab7d6da6bba31c072f2fbd4da9] <==
	I1227 20:16:13.796413       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:13.797072       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 20:16:13.797074       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 20:16:13.797100       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:13.797777       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 20:16:13.797963       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 20:16:13.798046       1 aggregator.go:187] initial CRD sync complete...
	I1227 20:16:13.798090       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 20:16:13.798127       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:16:13.798158       1 cache.go:39] Caches are synced for autoregister controller
	E1227 20:16:13.804997       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 20:16:13.818967       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:13.818980       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 20:16:13.819043       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 20:16:13.824892       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 20:16:13.829882       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:16:13.856520       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:16:13.903885       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:16:14.353399       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1227 20:16:16.144077       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1227 20:16:16.145490       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:16:16.162091       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:16:17.856302       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:16:18.028352       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 20:16:18.100041       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [7c4ac1dbe59ad7d3143dfe74886a6bc3058bfad37ae864b855a6e47c1a4d984e] <==
	I1227 20:15:51.302678       1 serving.go:386] Generated self-signed cert in-memory
	I1227 20:15:51.319186       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1227 20:15:51.319285       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:15:51.320999       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1227 20:15:51.321146       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1227 20:15:51.321625       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1227 20:15:51.321698       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1227 20:16:13.577648       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [8a1b0b47a0ed1caecc63a10c0f1f9666bd9ee325c50ecf1f6c7e085c9598dbfa] <==
	I1227 20:16:17.628599       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.628621       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.628679       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.633925       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.634025       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 20:16:17.634653       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.634716       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.634834       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.634959       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.635096       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.635317       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.635492       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.635766       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.656398       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.659067       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.751050       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549-m02"
	I1227 20:16:17.752259       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549-m03"
	I1227 20:16:17.752315       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549-m04"
	I1227 20:16:17.752343       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549"
	I1227 20:16:17.820816       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.820838       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:16:17.820843       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:16:17.829110       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.887401       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 20:16:51.537342       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-422549-m04"
	
	
	==> kube-proxy [e30e2fc201d45a408198fe1cf19728fccd5ebe17d0f5255f7589564c690889ec] <==
	I1227 20:16:15.717666       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:16:16.119519       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:16:16.241830       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:16.241930       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1227 20:16:16.242046       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:16:16.278310       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:16:16.278410       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:16:16.293265       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:16:16.293750       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:16:16.293812       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:16:16.298528       1 config.go:200] "Starting service config controller"
	I1227 20:16:16.298607       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:16:16.298663       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:16:16.298690       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:16:16.302047       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:16:16.303313       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:16:16.304201       1 config.go:309] "Starting node config controller"
	I1227 20:16:16.304276       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:16:16.304307       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:16:16.399041       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:16:16.402314       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 20:16:16.412735       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [acdd287d4087fec2c7c00eb589c13b06231128c1441e2db4a8f74c57600a6e67] <==
	I1227 20:16:11.576174       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:16:11.578273       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 20:16:11.585603       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 20:16:11.585856       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:16:11.585620       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1227 20:16:13.654680       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 20:16:13.654770       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 20:16:13.654897       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 20:16:13.654960       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 20:16:13.655015       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 20:16:13.655071       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 20:16:13.655125       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 20:16:13.655182       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 20:16:13.655240       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 20:16:13.655293       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 20:16:13.655342       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 20:16:13.655393       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 20:16:13.655511       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 20:16:13.655554       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 20:16:13.655597       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 20:16:13.655648       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 20:16:13.655681       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 20:16:13.655786       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 20:16:13.723865       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	I1227 20:16:15.292118       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:16:14 ha-422549 kubelet[804]: I1227 20:16:14.329398     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d69029af-1fc4-4a31-913e-92e1231e845a-lib-modules\") pod \"kube-proxy-mhmmn\" (UID: \"d69029af-1fc4-4a31-913e-92e1231e845a\") " pod="kube-system/kube-proxy-mhmmn"
	Dec 27 20:16:14 ha-422549 kubelet[804]: I1227 20:16:14.329542     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d69029af-1fc4-4a31-913e-92e1231e845a-xtables-lock\") pod \"kube-proxy-mhmmn\" (UID: \"d69029af-1fc4-4a31-913e-92e1231e845a\") " pod="kube-system/kube-proxy-mhmmn"
	Dec 27 20:16:14 ha-422549 kubelet[804]: I1227 20:16:14.329646     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66d834ae-af1b-456d-ae48-8a0d6608f961-xtables-lock\") pod \"kindnet-qkqmv\" (UID: \"66d834ae-af1b-456d-ae48-8a0d6608f961\") " pod="kube-system/kindnet-qkqmv"
	Dec 27 20:16:14 ha-422549 kubelet[804]: I1227 20:16:14.329783     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/66d834ae-af1b-456d-ae48-8a0d6608f961-cni-cfg\") pod \"kindnet-qkqmv\" (UID: \"66d834ae-af1b-456d-ae48-8a0d6608f961\") " pod="kube-system/kindnet-qkqmv"
	Dec 27 20:16:14 ha-422549 kubelet[804]: I1227 20:16:14.381247     804 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 27 20:16:14 ha-422549 kubelet[804]: W1227 20:16:14.683959     804 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/crio-735879ad1c236176f8b5399b57a79b6c0ab6195af5a05ee38eac2aa69480249f WatchSource:0}: Error finding container 735879ad1c236176f8b5399b57a79b6c0ab6195af5a05ee38eac2aa69480249f: Status 404 returned error can't find the container with id 735879ad1c236176f8b5399b57a79b6c0ab6195af5a05ee38eac2aa69480249f
	Dec 27 20:16:14 ha-422549 kubelet[804]: W1227 20:16:14.706665     804 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/crio-cf93f418a9a0a915233d2584b9d75339bc5bcc13264ad5d080fc2f42d9ebaff8 WatchSource:0}: Error finding container cf93f418a9a0a915233d2584b9d75339bc5bcc13264ad5d080fc2f42d9ebaff8: Status 404 returned error can't find the container with id cf93f418a9a0a915233d2584b9d75339bc5bcc13264ad5d080fc2f42d9ebaff8
	Dec 27 20:16:15 ha-422549 kubelet[804]: E1227 20:16:15.322797     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	Dec 27 20:16:15 ha-422549 kubelet[804]: E1227 20:16:15.333130     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-ha-422549" containerName="kube-controller-manager"
	Dec 27 20:16:15 ha-422549 kubelet[804]: E1227 20:16:15.350577     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	Dec 27 20:16:15 ha-422549 kubelet[804]: I1227 20:16:15.550682     804 kubelet_node_status.go:74] "Attempting to register node" node="ha-422549"
	Dec 27 20:16:15 ha-422549 kubelet[804]: I1227 20:16:15.614938     804 kubelet_node_status.go:123] "Node was previously registered" node="ha-422549"
	Dec 27 20:16:15 ha-422549 kubelet[804]: I1227 20:16:15.615228     804 kubelet_node_status.go:77] "Successfully registered node" node="ha-422549"
	Dec 27 20:16:15 ha-422549 kubelet[804]: I1227 20:16:15.615315     804 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 27 20:16:15 ha-422549 kubelet[804]: I1227 20:16:15.616294     804 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 27 20:16:16 ha-422549 kubelet[804]: E1227 20:16:16.196898     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-ha-422549" containerName="kube-scheduler"
	Dec 27 20:16:16 ha-422549 kubelet[804]: E1227 20:16:16.353325     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	Dec 27 20:16:16 ha-422549 kubelet[804]: E1227 20:16:16.354607     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	Dec 27 20:16:20 ha-422549 kubelet[804]: E1227 20:16:20.687129     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-ha-422549" containerName="kube-controller-manager"
	Dec 27 20:16:21 ha-422549 kubelet[804]: E1227 20:16:21.706076     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-ha-422549" containerName="kube-apiserver"
	Dec 27 20:16:22 ha-422549 kubelet[804]: E1227 20:16:22.368737     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-ha-422549" containerName="kube-apiserver"
	Dec 27 20:16:30 ha-422549 kubelet[804]: E1227 20:16:30.696140     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-ha-422549" containerName="kube-controller-manager"
	Dec 27 20:16:45 ha-422549 kubelet[804]: I1227 20:16:45.426555     804 scope.go:122] "RemoveContainer" containerID="7acd50dc5298fb99db44502b466c9e34b79ddce5613479143c4c5834f09f1731"
	Dec 27 20:16:56 ha-422549 kubelet[804]: E1227 20:16:56.356173     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	Dec 27 20:16:56 ha-422549 kubelet[804]: E1227 20:16:56.356735     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-422549 -n ha-422549
helpers_test.go:270: (dbg) Run:  kubectl --context ha-422549 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (85.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 node add --control-plane --alsologtostderr -v 5
E1227 20:17:13.966630  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:17:54.129674  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-422549 node add --control-plane --alsologtostderr -v 5: (1m21.570096664s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5: (1.296109948s)
ha_test.go:618: status says not all three control-plane nodes are present: args "out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5": ha-422549
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-422549-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-422549-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-422549-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-422549-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:621: status says not all four hosts are running: args "out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5": ha-422549
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-422549-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-422549-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-422549-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-422549-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:624: status says not all four kubelets are running: args "out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5": ha-422549
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-422549-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-422549-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-422549-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-422549-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:627: status says not all three apiservers are running: args "out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5": ha-422549
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-422549-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-422549-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-422549-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-422549-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
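
The four ha_test.go assertions above (lines 618, 621, 624 and 627) each reprint the same minikube status output while reporting that the expected numbers of control planes, hosts, kubelets and apiservers were not found. Purely as an illustrative sketch (this is not the actual ha_test.go logic), the same kind of role tally can be reproduced over the status text shown above; the pasted string below is a hypothetical abbreviation of that output:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Paste the minikube status text from the failed assertion here; this is an
	// abbreviated stand-in covering one control-plane node and the worker.
	status := `ha-422549
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-422549-m04
type: Worker
host: Running
kubelet: Running`

	// The failing checks describe counts of exactly these markers.
	fmt.Println("control planes:", strings.Count(status, "type: Control Plane"))
	fmt.Println("workers:       ", strings.Count(status, "type: Worker"))
	fmt.Println("running hosts: ", strings.Count(status, "host: Running"))
	fmt.Println("apiservers:    ", strings.Count(status, "apiserver: Running"))
}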

                                                
                                                
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-422549
helpers_test.go:244: (dbg) docker inspect ha-422549:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf",
	        "Created": "2025-12-27T20:03:01.682141141Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 337233,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:15:42.462104956Z",
	            "FinishedAt": "2025-12-27T20:15:41.57505881Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/hostname",
	        "HostsPath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/hosts",
	        "LogPath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf-json.log",
	        "Name": "/ha-422549",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-422549:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-422549",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf",
	                "LowerDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064/merged",
	                "UpperDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064/diff",
	                "WorkDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-422549",
	                "Source": "/var/lib/docker/volumes/ha-422549/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422549",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422549",
	                "name.minikube.sigs.k8s.io": "ha-422549",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bb71ec3c47b900c0fa3f8d54314b359c784cf244167438faa167df26866a5f2b",
	            "SandboxKey": "/var/run/docker/netns/bb71ec3c47b9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422549": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:de:7f:b9:2b:dc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9521cb9225c5842f69a8435c5cf5485b75f9a8b2c68158742ff27c2be32f5951",
	                    "EndpointID": "8d5c856b7af95de0f10e89f9cba406f7c7feb68311acbe9cee0239ed57d8152d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422549",
	                        "53fd780c3df5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
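
The docker inspect dump above carries the container's state and the static address minikube assigned on the ha-422549 network (192.168.49.2 under NetworkSettings.Networks). As a minimal sketch, assuming the docker CLI is on PATH and the ha-422549 container still exists, the same fields can be read programmatically instead of scanning the JSON by hand; the struct below mirrors only the keys visible in the dump and is not a minikube helper:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// docker inspect prints a JSON array with one element per inspected object.
	out, err := exec.Command("docker", "inspect", "ha-422549").Output()
	if err != nil {
		log.Fatal(err)
	}
	var containers []struct {
		State struct {
			Status  string
			Running bool
		}
		NetworkSettings struct {
			Networks map[string]struct {
				IPAddress string
				Gateway   string
			}
		}
	}
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Println("state:", c.State.Status, "running:", c.State.Running)
		for name, n := range c.NetworkSettings.Networks {
			// Expected from the dump above: ha-422549 192.168.49.2 via 192.168.49.1
			fmt.Println("network:", name, "ip:", n.IPAddress, "gw:", n.Gateway)
		}
	}
}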
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-422549 -n ha-422549
helpers_test.go:253: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p ha-422549 logs -n 25: (1.784780212s)
helpers_test.go:261: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-422549 ssh -n ha-422549-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test_ha-422549-m03_ha-422549-m04.txt                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp testdata/cp-test.txt ha-422549-m04:/home/docker/cp-test.txt                                                             │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3848759327/001/cp-test_ha-422549-m04.txt │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt ha-422549:/home/docker/cp-test_ha-422549-m04_ha-422549.txt                       │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549 sudo cat /home/docker/cp-test_ha-422549-m04_ha-422549.txt                                                 │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt ha-422549-m02:/home/docker/cp-test_ha-422549-m04_ha-422549-m02.txt               │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m02 sudo cat /home/docker/cp-test_ha-422549-m04_ha-422549-m02.txt                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt ha-422549-m03:/home/docker/cp-test_ha-422549-m04_ha-422549-m03.txt               │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m03 sudo cat /home/docker/cp-test_ha-422549-m04_ha-422549-m03.txt                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ node    │ ha-422549 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ node    │ ha-422549 node start m02 --alsologtostderr -v 5                                                                                      │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ node    │ ha-422549 node list --alsologtostderr -v 5                                                                                           │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │                     │
	│ stop    │ ha-422549 stop --alsologtostderr -v 5                                                                                                │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:07 UTC │
	│ start   │ ha-422549 start --wait true --alsologtostderr -v 5                                                                                   │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:07 UTC │                     │
	│ node    │ ha-422549 node list --alsologtostderr -v 5                                                                                           │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:15 UTC │                     │
	│ node    │ ha-422549 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:15 UTC │                     │
	│ stop    │ ha-422549 stop --alsologtostderr -v 5                                                                                                │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:15 UTC │ 27 Dec 25 20:15 UTC │
	│ start   │ ha-422549 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:15 UTC │ 27 Dec 25 20:17 UTC │
	│ node    │ ha-422549 node add --control-plane --alsologtostderr -v 5                                                                            │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:17 UTC │ 27 Dec 25 20:18 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:15:42
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:15:42.161076  337106 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:15:42.161339  337106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:15:42.161371  337106 out.go:374] Setting ErrFile to fd 2...
	I1227 20:15:42.161395  337106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:15:42.161910  337106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:15:42.162549  337106 out.go:368] Setting JSON to false
	I1227 20:15:42.163583  337106 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":7095,"bootTime":1766859448,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:15:42.163745  337106 start.go:143] virtualization:  
	I1227 20:15:42.167252  337106 out.go:179] * [ha-422549] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:15:42.171750  337106 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:15:42.172029  337106 notify.go:221] Checking for updates...
	I1227 20:15:42.178183  337106 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:15:42.181404  337106 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:15:42.184507  337106 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:15:42.187835  337106 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:15:42.191251  337106 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:15:42.194951  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:42.195780  337106 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:15:42.234793  337106 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:15:42.234922  337106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:15:42.302450  337106 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 20:15:42.291742685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:15:42.302570  337106 docker.go:319] overlay module found
	I1227 20:15:42.305766  337106 out.go:179] * Using the docker driver based on existing profile
	I1227 20:15:42.308585  337106 start.go:309] selected driver: docker
	I1227 20:15:42.308605  337106 start.go:928] validating driver "docker" against &{Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacc
el:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:15:42.308760  337106 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:15:42.308874  337106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:15:42.372262  337106 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 20:15:42.36286995 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:15:42.372694  337106 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:15:42.372727  337106 cni.go:84] Creating CNI manager for ""
	I1227 20:15:42.372789  337106 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1227 20:15:42.372841  337106 start.go:353] cluster config:
	{Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:15:42.376040  337106 out.go:179] * Starting "ha-422549" primary control-plane node in "ha-422549" cluster
	I1227 20:15:42.378965  337106 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:15:42.382020  337106 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:15:42.384910  337106 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:15:42.384967  337106 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:15:42.385060  337106 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:15:42.385090  337106 cache.go:65] Caching tarball of preloaded images
	I1227 20:15:42.385178  337106 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:15:42.385188  337106 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:15:42.385327  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:15:42.406731  337106 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:15:42.406754  337106 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:15:42.406775  337106 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:15:42.406807  337106 start.go:360] acquireMachinesLock for ha-422549: {Name:mk939e8ee4c2bedc86cc6a99d76298e7b2a26ce2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:15:42.406878  337106 start.go:364] duration metric: took 49.87µs to acquireMachinesLock for "ha-422549"
	I1227 20:15:42.406911  337106 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:15:42.406918  337106 fix.go:54] fixHost starting: 
	I1227 20:15:42.407176  337106 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:15:42.424618  337106 fix.go:112] recreateIfNeeded on ha-422549: state=Stopped err=<nil>
	W1227 20:15:42.424651  337106 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:15:42.429793  337106 out.go:252] * Restarting existing docker container for "ha-422549" ...
	I1227 20:15:42.429887  337106 cli_runner.go:164] Run: docker start ha-422549
	I1227 20:15:42.679169  337106 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:15:42.705015  337106 kic.go:430] container "ha-422549" state is running.
	I1227 20:15:42.705398  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:15:42.726555  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:15:42.726800  337106 machine.go:94] provisionDockerMachine start ...
	I1227 20:15:42.726868  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:42.751689  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:42.752020  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1227 20:15:42.752029  337106 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:15:42.752567  337106 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60238->127.0.0.1:33183: read: connection reset by peer
	I1227 20:15:45.888954  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549
	
	I1227 20:15:45.888987  337106 ubuntu.go:182] provisioning hostname "ha-422549"
	I1227 20:15:45.889052  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:45.906473  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:45.906784  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1227 20:15:45.906800  337106 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-422549 && echo "ha-422549" | sudo tee /etc/hostname
	I1227 20:15:46.050632  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549
	
	I1227 20:15:46.050726  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:46.069043  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:46.069357  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1227 20:15:46.069378  337106 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422549' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422549/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422549' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:15:46.210430  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:15:46.210454  337106 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:15:46.210475  337106 ubuntu.go:190] setting up certificates
	I1227 20:15:46.210485  337106 provision.go:84] configureAuth start
	I1227 20:15:46.210557  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:15:46.227543  337106 provision.go:143] copyHostCerts
	I1227 20:15:46.227593  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:15:46.227625  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:15:46.227646  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:15:46.227726  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:15:46.227825  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:15:46.227847  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:15:46.227858  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:15:46.227890  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:15:46.227942  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:15:46.227963  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:15:46.227975  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:15:46.228004  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:15:46.228059  337106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.ha-422549 san=[127.0.0.1 192.168.49.2 ha-422549 localhost minikube]
	I1227 20:15:46.477651  337106 provision.go:177] copyRemoteCerts
	I1227 20:15:46.477745  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:15:46.477812  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:46.494398  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:15:46.592817  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:15:46.592877  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1227 20:15:46.609148  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:15:46.609214  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:15:46.626129  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:15:46.626186  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:15:46.643096  337106 provision.go:87] duration metric: took 432.58782ms to configureAuth
	I1227 20:15:46.643124  337106 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:15:46.643376  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:46.643487  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:46.660667  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:46.661005  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1227 20:15:46.661026  337106 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:15:47.007057  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:15:47.007122  337106 machine.go:97] duration metric: took 4.280312247s to provisionDockerMachine
	I1227 20:15:47.007150  337106 start.go:293] postStartSetup for "ha-422549" (driver="docker")
	I1227 20:15:47.007178  337106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:15:47.007279  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:15:47.007348  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:47.029053  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:15:47.129052  337106 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:15:47.132168  337106 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:15:47.132192  337106 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:15:47.132203  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:15:47.132254  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:15:47.132333  337106 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:15:47.132339  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:15:47.132433  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:15:47.139569  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:15:47.156024  337106 start.go:296] duration metric: took 148.843658ms for postStartSetup
	I1227 20:15:47.156149  337106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:15:47.156211  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:47.173109  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:15:47.266513  337106 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:15:47.270816  337106 fix.go:56] duration metric: took 4.86389233s for fixHost
	I1227 20:15:47.270844  337106 start.go:83] releasing machines lock for "ha-422549", held for 4.863953055s
	I1227 20:15:47.270913  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:15:47.287367  337106 ssh_runner.go:195] Run: cat /version.json
	I1227 20:15:47.287429  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:47.287703  337106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:15:47.287764  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:47.309269  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:15:47.309529  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:15:47.405178  337106 ssh_runner.go:195] Run: systemctl --version
	I1227 20:15:47.511199  337106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:15:47.547392  337106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:15:47.551737  337106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:15:47.551827  337106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:15:47.559324  337106 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:15:47.559347  337106 start.go:496] detecting cgroup driver to use...
	I1227 20:15:47.559388  337106 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:15:47.559434  337106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:15:47.574366  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:15:47.587100  337106 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:15:47.587164  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:15:47.602600  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:15:47.615779  337106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:15:47.738070  337106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:15:47.863690  337106 docker.go:234] disabling docker service ...
	I1227 20:15:47.863793  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:15:47.878841  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:15:47.891780  337106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:15:48.005581  337106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:15:48.146501  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:15:48.159335  337106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:15:48.172971  337106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:15:48.173057  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.182022  337106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:15:48.182123  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.190766  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.199691  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.208613  337106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:15:48.216583  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.225357  337106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.238325  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.247144  337106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:15:48.254972  337106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:15:48.262335  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:15:48.380620  337106 ssh_runner.go:195] Run: sudo systemctl restart crio
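For reference (illustrative, not part of the test output): the sed edits above adjust /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted; the keys they touch can be spot-checked with grep, with the expected values taken from the commands logged above.
	# hypothetical spot-check of the keys edited by the sed commands above
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",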
	I1227 20:15:48.551875  337106 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:15:48.551947  337106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:15:48.555685  337106 start.go:574] Will wait 60s for crictl version
	I1227 20:15:48.555757  337106 ssh_runner.go:195] Run: which crictl
	I1227 20:15:48.559221  337106 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:15:48.585662  337106 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:15:48.585789  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:15:48.613651  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:15:48.644252  337106 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:15:48.647214  337106 cli_runner.go:164] Run: docker network inspect ha-422549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:15:48.663170  337106 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 20:15:48.666927  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:15:48.676701  337106 kubeadm.go:884] updating cluster {Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:15:48.676861  337106 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:15:48.676926  337106 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:15:48.713302  337106 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:15:48.713323  337106 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:15:48.713375  337106 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:15:48.738578  337106 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:15:48.738606  337106 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:15:48.738615  337106 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I1227 20:15:48.738716  337106 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422549 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:15:48.738798  337106 ssh_runner.go:195] Run: crio config
	I1227 20:15:48.806339  337106 cni.go:84] Creating CNI manager for ""
	I1227 20:15:48.806361  337106 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1227 20:15:48.806383  337106 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:15:48.806406  337106 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422549 NodeName:ha-422549 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:15:48.806540  337106 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422549"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:15:48.806566  337106 kube-vip.go:115] generating kube-vip config ...
	I1227 20:15:48.806619  337106 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 20:15:48.818243  337106 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
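For reference (illustrative, not from the run): the probe above looks for IPVS kernel modules, and because none are loaded the log below falls back to a kube-vip config without control-plane load-balancing. Reproducing the check, and loading the modules on a host that ships them, would look roughly like:
	# same probe the test runs; exit status 1 means no ip_vs modules are loaded
	sudo sh -c "lsmod | grep ip_vs"
	# hypothetical remedy where the modules are available (may not apply to this kic base image)
	sudo modprobe -a ip_vs ip_vs_rr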
	I1227 20:15:48.818375  337106 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1227 20:15:48.818447  337106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:15:48.825705  337106 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:15:48.825785  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1227 20:15:48.832852  337106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1227 20:15:48.844713  337106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:15:48.856701  337106 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1227 20:15:48.868844  337106 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 20:15:48.880915  337106 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 20:15:48.884598  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:15:48.893875  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:15:49.019776  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
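For reference (illustrative, not from the run): the scp steps above write the kubelet systemd unit, its 10-kubeadm.conf drop-in, the kubeadm.yaml.new payload, and the kube-vip static-pod manifest; on the node they can be verified with standard tooling.
	# show /lib/systemd/system/kubelet.service together with the 10-kubeadm.conf drop-in written above
	systemctl cat kubelet
	# the static-pod directory should now contain kube-vip.yaml
	ls /etc/kubernetes/manifests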
	I1227 20:15:49.036215  337106 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549 for IP: 192.168.49.2
	I1227 20:15:49.036242  337106 certs.go:195] generating shared ca certs ...
	I1227 20:15:49.036258  337106 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:15:49.036390  337106 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:15:49.036447  337106 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:15:49.036460  337106 certs.go:257] generating profile certs ...
	I1227 20:15:49.036541  337106 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key
	I1227 20:15:49.036611  337106 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.743f7ef3
	I1227 20:15:49.036653  337106 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key
	I1227 20:15:49.036666  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:15:49.036679  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:15:49.036694  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:15:49.036704  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:15:49.036720  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:15:49.036731  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:15:49.036746  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:15:49.036756  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:15:49.036804  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:15:49.036836  337106 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:15:49.036848  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:15:49.036874  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:15:49.036910  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:15:49.036939  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:15:49.037002  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:15:49.037036  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:15:49.037057  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:49.037072  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:15:49.037704  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:15:49.057400  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:15:49.076605  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:15:49.095621  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:15:49.115441  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:15:49.135019  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:15:49.162312  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:15:49.179956  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:15:49.203774  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:15:49.228107  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:15:49.246930  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:15:49.265916  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:15:49.281838  337106 ssh_runner.go:195] Run: openssl version
	I1227 20:15:49.287989  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:49.295912  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:15:49.303435  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:49.307018  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:49.307115  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:49.347922  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:15:49.354929  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:15:49.361715  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:15:49.368688  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:15:49.372719  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:15:49.372798  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:15:49.413917  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:15:49.421060  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:15:49.428016  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:15:49.435273  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:15:49.438964  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:15:49.439075  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:15:49.480693  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:15:49.488361  337106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:15:49.492062  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:15:49.532621  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:15:49.573227  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:15:49.615004  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:15:49.660835  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:15:49.706320  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
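For reference (illustrative): each of the -checkend 86400 calls above exits 0 only if the certificate remains valid for at least another 24 hours; a non-zero status would indicate an expiring certificate. For example:
	# exit 0: still valid for >= 86400s; exit 1: expires within 24h
	sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt && echo "apiserver cert ok"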
	I1227 20:15:49.793965  337106 kubeadm.go:401] StartCluster: {Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:15:49.794119  337106 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:15:49.794193  337106 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:15:49.873661  337106 cri.go:96] found id: "acdd287d4087fec2c7c00eb589c13b06231128c1441e2db4a8f74c57600a6e67"
	I1227 20:15:49.873685  337106 cri.go:96] found id: "7c4ac1dbe59ad7d3143dfe74886a6bc3058bfad37ae864b855a6e47c1a4d984e"
	I1227 20:15:49.873690  337106 cri.go:96] found id: "6b0b91d1da0a4c385d0d3110ebc1d18efbc54bab7d6da6bba31c072f2fbd4da9"
	I1227 20:15:49.873694  337106 cri.go:96] found id: "776b31832bd3b44eb905f188f6aa9c0428a287ba7eaeb4ed172dd8bef1b5795b"
	I1227 20:15:49.873697  337106 cri.go:96] found id: "97ce57129ce3bc803fd62d49e1f3f06d06aa64d93e2ef36f372084cbbd21e34a"
	I1227 20:15:49.873717  337106 cri.go:96] found id: ""
	I1227 20:15:49.873771  337106 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:15:49.891661  337106 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:15:49Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:15:49.891749  337106 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:15:49.906600  337106 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:15:49.906624  337106 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:15:49.906703  337106 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:15:49.919028  337106 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:15:49.919479  337106 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-422549" does not appear in /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:15:49.919620  337106 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-272475/kubeconfig needs updating (will repair): [kubeconfig missing "ha-422549" cluster setting kubeconfig missing "ha-422549" context setting]
	I1227 20:15:49.919957  337106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:15:49.920555  337106 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 20:15:49.921302  337106 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1227 20:15:49.921327  337106 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1227 20:15:49.921333  337106 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1227 20:15:49.921364  337106 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1227 20:15:49.921405  337106 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1227 20:15:49.921411  337106 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1227 20:15:49.921423  337106 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1227 20:15:49.921745  337106 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:15:49.936013  337106 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1227 20:15:49.936040  337106 kubeadm.go:602] duration metric: took 29.409884ms to restartPrimaryControlPlane
	I1227 20:15:49.936051  337106 kubeadm.go:403] duration metric: took 142.110676ms to StartCluster
	I1227 20:15:49.936075  337106 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:15:49.936142  337106 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:15:49.937228  337106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:15:49.937930  337106 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:15:49.938100  337106 start.go:242] waiting for startup goroutines ...
	I1227 20:15:49.938130  337106 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:15:49.939423  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:49.942218  337106 out.go:179] * Enabled addons: 
	I1227 20:15:49.945329  337106 addons.go:530] duration metric: took 7.202537ms for enable addons: enabled=[]
	I1227 20:15:49.945417  337106 start.go:247] waiting for cluster config update ...
	I1227 20:15:49.945442  337106 start.go:256] writing updated cluster config ...
	I1227 20:15:49.948818  337106 out.go:203] 
	I1227 20:15:49.952226  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:49.952424  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:15:49.955848  337106 out.go:179] * Starting "ha-422549-m02" control-plane node in "ha-422549" cluster
	I1227 20:15:49.958975  337106 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:15:49.962204  337106 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:15:49.965179  337106 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:15:49.965273  337106 cache.go:65] Caching tarball of preloaded images
	I1227 20:15:49.965249  337106 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:15:49.965709  337106 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:15:49.965749  337106 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:15:49.965939  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:15:49.990566  337106 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:15:49.990585  337106 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:15:49.990599  337106 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:15:49.990629  337106 start.go:360] acquireMachinesLock for ha-422549-m02: {Name:mk8fc7aa5d6c41749cc4b9db094e3fd243d8b868 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:15:49.990677  337106 start.go:364] duration metric: took 33.255µs to acquireMachinesLock for "ha-422549-m02"
	I1227 20:15:49.990697  337106 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:15:49.990704  337106 fix.go:54] fixHost starting: m02
	I1227 20:15:49.990960  337106 cli_runner.go:164] Run: docker container inspect ha-422549-m02 --format={{.State.Status}}
	I1227 20:15:50.012661  337106 fix.go:112] recreateIfNeeded on ha-422549-m02: state=Stopped err=<nil>
	W1227 20:15:50.012689  337106 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:15:50.016334  337106 out.go:252] * Restarting existing docker container for "ha-422549-m02" ...
	I1227 20:15:50.016437  337106 cli_runner.go:164] Run: docker start ha-422549-m02
	I1227 20:15:50.398628  337106 cli_runner.go:164] Run: docker container inspect ha-422549-m02 --format={{.State.Status}}
	I1227 20:15:50.427580  337106 kic.go:430] container "ha-422549-m02" state is running.
	I1227 20:15:50.427943  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m02
	I1227 20:15:50.459424  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:15:50.459657  337106 machine.go:94] provisionDockerMachine start ...
	I1227 20:15:50.459714  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:50.490531  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:50.493631  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1227 20:15:50.493650  337106 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:15:50.494339  337106 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 20:15:53.641274  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m02
	
	I1227 20:15:53.641349  337106 ubuntu.go:182] provisioning hostname "ha-422549-m02"
	I1227 20:15:53.641467  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:53.663080  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:53.663387  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1227 20:15:53.663406  337106 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-422549-m02 && echo "ha-422549-m02" | sudo tee /etc/hostname
	I1227 20:15:53.819054  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m02
	
	I1227 20:15:53.819139  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:53.847197  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:53.847500  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1227 20:15:53.847516  337106 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422549-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422549-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422549-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:15:53.989824  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:15:53.989849  337106 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:15:53.989866  337106 ubuntu.go:190] setting up certificates
	I1227 20:15:53.989878  337106 provision.go:84] configureAuth start
	I1227 20:15:53.989941  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m02
	I1227 20:15:54.009870  337106 provision.go:143] copyHostCerts
	I1227 20:15:54.009915  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:15:54.009950  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:15:54.009964  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:15:54.010041  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:15:54.010125  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:15:54.010148  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:15:54.010153  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:15:54.010182  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:15:54.010267  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:15:54.010289  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:15:54.010297  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:15:54.010323  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:15:54.010374  337106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.ha-422549-m02 san=[127.0.0.1 192.168.49.3 ha-422549-m02 localhost minikube]
	I1227 20:15:54.260286  337106 provision.go:177] copyRemoteCerts
	I1227 20:15:54.260405  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:15:54.260467  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:54.278663  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:15:54.377066  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:15:54.377172  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:15:54.395067  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:15:54.395180  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 20:15:54.412398  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:15:54.412507  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 20:15:54.429091  337106 provision.go:87] duration metric: took 439.199295ms to configureAuth
	I1227 20:15:54.429119  337106 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:15:54.429346  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:54.429480  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:54.446402  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:54.446712  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1227 20:15:54.446736  337106 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:15:54.817328  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:15:54.817351  337106 machine.go:97] duration metric: took 4.357685623s to provisionDockerMachine
	I1227 20:15:54.817363  337106 start.go:293] postStartSetup for "ha-422549-m02" (driver="docker")
	I1227 20:15:54.817373  337106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:15:54.817438  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:15:54.817558  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:54.834291  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:15:54.933155  337106 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:15:54.936441  337106 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:15:54.936469  337106 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:15:54.936480  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:15:54.936536  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:15:54.936618  337106 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:15:54.936632  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:15:54.936739  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:15:54.944112  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:15:54.961353  337106 start.go:296] duration metric: took 143.973459ms for postStartSetup
	I1227 20:15:54.961439  337106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:15:54.961529  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:54.978679  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:15:55.075001  337106 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:15:55.080166  337106 fix.go:56] duration metric: took 5.089454661s for fixHost
	I1227 20:15:55.080193  337106 start.go:83] releasing machines lock for "ha-422549-m02", held for 5.089507139s
	I1227 20:15:55.080267  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m02
	I1227 20:15:55.100982  337106 out.go:179] * Found network options:
	I1227 20:15:55.103953  337106 out.go:179]   - NO_PROXY=192.168.49.2
	W1227 20:15:55.106802  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:15:55.106845  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	I1227 20:15:55.106919  337106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:15:55.106964  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:55.107011  337106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:15:55.107066  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:55.130151  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:15:55.137687  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:15:55.324223  337106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:15:55.328436  337106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:15:55.328502  337106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:15:55.336088  337106 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:15:55.336120  337106 start.go:496] detecting cgroup driver to use...
	I1227 20:15:55.336165  337106 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:15:55.336216  337106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:15:55.350639  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:15:55.363702  337106 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:15:55.363812  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:15:55.380023  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:15:55.396017  337106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:15:55.627299  337106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:15:55.867067  337106 docker.go:234] disabling docker service ...
	I1227 20:15:55.867179  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:15:55.887006  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:15:55.903434  337106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:15:56.147368  337106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:15:56.372701  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:15:56.386071  337106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:15:56.438830  337106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:15:56.438945  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.453154  337106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:15:56.453272  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.469839  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.480255  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.492229  337106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:15:56.504717  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.522023  337106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.536543  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.549900  337106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:15:56.562631  337106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:15:56.570307  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:15:56.790142  337106 ssh_runner.go:195] Run: sudo systemctl restart crio
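The sed edits above rewrite CRI-O's minikube drop-in before the restart: the pause image is pinned to registry.k8s.io/pause:3.10.1, the cgroup manager is forced to cgroupfs with conmon placed in the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A minimal way to confirm the resulting values on the node (a sketch; the rest of 02-crio.conf is left untouched and may differ between CRI-O versions):

    # Show only the fields the sed commands above are expected to have set.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # Expected, per the commands in this log:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #     "net.ipv4.ip_unprivileged_port_start=0",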
	I1227 20:15:57.038862  337106 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:15:57.038970  337106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:15:57.042575  337106 start.go:574] Will wait 60s for crictl version
	I1227 20:15:57.042675  337106 ssh_runner.go:195] Run: which crictl
	I1227 20:15:57.046123  337106 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:15:57.079472  337106 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:15:57.079604  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:15:57.111539  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:15:57.144245  337106 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:15:57.147176  337106 out.go:179]   - env NO_PROXY=192.168.49.2
	I1227 20:15:57.150339  337106 cli_runner.go:164] Run: docker network inspect ha-422549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:15:57.166874  337106 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 20:15:57.170704  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:15:57.180393  337106 mustload.go:66] Loading cluster: ha-422549
	I1227 20:15:57.180638  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:57.180911  337106 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:15:57.198058  337106 host.go:66] Checking if "ha-422549" exists ...
	I1227 20:15:57.198339  337106 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549 for IP: 192.168.49.3
	I1227 20:15:57.198353  337106 certs.go:195] generating shared ca certs ...
	I1227 20:15:57.198367  337106 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:15:57.198490  337106 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:15:57.198538  337106 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:15:57.198549  337106 certs.go:257] generating profile certs ...
	I1227 20:15:57.198625  337106 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key
	I1227 20:15:57.198688  337106 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.982843aa
	I1227 20:15:57.198735  337106 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key
	I1227 20:15:57.198748  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:15:57.198762  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:15:57.198779  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:15:57.198791  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:15:57.198810  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:15:57.198822  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:15:57.198837  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:15:57.198847  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:15:57.198901  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:15:57.198935  337106 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:15:57.198948  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:15:57.198974  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:15:57.199001  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:15:57.199031  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:15:57.199079  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:15:57.199116  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:15:57.199131  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:15:57.199146  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:57.199227  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:57.217178  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:15:57.309803  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1227 20:15:57.313760  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1227 20:15:57.321367  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1227 20:15:57.324564  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1227 20:15:57.332196  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1227 20:15:57.335588  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1227 20:15:57.343125  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1227 20:15:57.346654  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1227 20:15:57.354254  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1227 20:15:57.357588  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1227 20:15:57.365565  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1227 20:15:57.369083  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1227 20:15:57.377616  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:15:57.394501  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:15:57.411297  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:15:57.428988  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:15:57.454933  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:15:57.477949  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:15:57.503718  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:15:57.527644  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:15:57.546021  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:15:57.562799  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:15:57.579794  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:15:57.596739  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1227 20:15:57.608968  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1227 20:15:57.621234  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1227 20:15:57.633283  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1227 20:15:57.645247  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1227 20:15:57.656994  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1227 20:15:57.668811  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (728 bytes)
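At this point the shared trust material (cluster CA, proxy-client CA, service-account key pair, front-proxy CA and etcd CA) has been synced onto the second control-plane node, so both members present certificates signed by the same roots. A hedged spot-check that the etcd CA really matches across nodes (illustrative minikube ssh commands for this profile, not part of the test flow):

    # The two digests should be identical if the CA sync above succeeded.
    minikube -p ha-422549 ssh                  -- sudo sha256sum /var/lib/minikube/certs/etcd/ca.crt
    minikube -p ha-422549 ssh -n ha-422549-m02 -- sudo sha256sum /var/lib/minikube/certs/etcd/ca.crt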
	I1227 20:15:57.680824  337106 ssh_runner.go:195] Run: openssl version
	I1227 20:15:57.687264  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:15:57.694487  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:15:57.701580  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:15:57.705288  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:15:57.705345  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:15:57.746792  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:15:57.754009  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:15:57.760822  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:15:57.767703  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:15:57.771201  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:15:57.771305  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:15:57.813599  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:15:57.821036  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:57.828245  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:15:57.835688  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:57.839528  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:57.839640  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:57.880298  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:15:57.887708  337106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:15:57.891264  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:15:57.931649  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:15:57.972880  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:15:58.015739  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:15:58.057920  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:15:58.099308  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
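Each openssl x509 -checkend 86400 run above asks whether the named certificate will still be valid 86400 seconds (24 hours) from now; openssl exits 0 when the cert outlives that window and 1 when it would expire, so minikube can tell whether a cert is close to expiring. Checked by hand it looks like this (a sketch against one of the paths from the log):

    # Exit 0 ("Certificate will not expire") means the cert is good for at least 24h.
    sudo openssl x509 -noout \
        -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400 \
        && echo "still valid in 24h" || echo "expires within 24h"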
	I1227 20:15:58.140147  337106 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.35.0 crio true true} ...
	I1227 20:15:58.140265  337106 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422549-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
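The ExecStart line above is what minikube renders into the kubelet systemd drop-in for this node (it is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down, 363 bytes). One possible way to confirm what the node's kubelet actually starts with:

    # Print the kubelet unit together with its drop-ins,
    # then show the flags of the running kubelet process.
    sudo systemctl cat kubelet
    pgrep -af kubelet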
	I1227 20:15:58.140313  337106 kube-vip.go:115] generating kube-vip config ...
	I1227 20:15:58.140373  337106 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 20:15:58.151945  337106 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:15:58.152003  337106 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
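Because "lsmod | grep ip_vs" came back empty, the kube-vip manifest above is generated without IPVS-based control-plane load balancing and relies on the ARP-announced VIP 192.168.49.254 on eth0. A quick manual check of IPVS availability on the node (hypothetical commands, not something minikube runs here):

    # See whether the ip_vs module can be loaded; if modprobe fails the kernel
    # has no IPVS support and the ARP-based VIP is the only option.
    lsmod | grep ip_vs                           # empty on this node, per the log
    sudo modprobe ip_vs && lsmod | grep ip_vs    # load and re-check, if the kernel ships the module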
	I1227 20:15:58.152075  337106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:15:58.159193  337106 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:15:58.159305  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1227 20:15:58.166464  337106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 20:15:58.178769  337106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:15:58.190381  337106 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 20:15:58.202642  337106 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 20:15:58.206198  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:15:58.215567  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:15:58.331455  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:15:58.345573  337106 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:15:58.345907  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:58.350455  337106 out.go:179] * Verifying Kubernetes components...
	I1227 20:15:58.353287  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:15:58.476026  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:15:58.491956  337106 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1227 20:15:58.492036  337106 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1227 20:15:58.492360  337106 node_ready.go:35] waiting up to 6m0s for node "ha-422549-m02" to be "Ready" ...
	W1227 20:16:08.493659  337106 node_ready.go:55] error getting node "ha-422549-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422549-m02": net/http: TLS handshake timeout
	W1227 20:16:13.724508  337106 node_ready.go:57] node "ha-422549-m02" has "Ready":"Unknown" status (will retry)
	I1227 20:16:13.998074  337106 node_ready.go:49] node "ha-422549-m02" is "Ready"
	I1227 20:16:13.998104  337106 node_ready.go:38] duration metric: took 15.505718327s for node "ha-422549-m02" to be "Ready" ...
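The wait above polls the node object until its Ready condition turns true (about 15.5s here, against a 6m budget). The equivalent check by hand, assuming the repaired kubeconfig and the ha-422549 context from earlier in this log:

    # Block until the restarted secondary control-plane node reports Ready (or 6m elapses).
    kubectl --context ha-422549 wait --for=condition=Ready node/ha-422549-m02 --timeout=6m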
	I1227 20:16:13.998117  337106 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:16:13.998195  337106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:16:14.018969  337106 api_server.go:72] duration metric: took 15.673348785s to wait for apiserver process to appear ...
	I1227 20:16:14.019000  337106 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:16:14.019022  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:14.028770  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:16:14.028803  337106 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
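The only failing probe in the 500 responses above is the rbac/bootstrap-roles post-start hook, which typically stays red for a short while after an API server restart until the default RBAC objects have been reconciled; every other check already reports [+] ok, so minikube simply keeps polling. The same per-check breakdown can be fetched by hand (a sketch; /healthz is normally readable without credentials via the system:public-info-viewer binding):

    # -k skips verification against the cluster CA; ?verbose forces the per-check listing.
    curl -sk "https://192.168.49.2:8443/healthz?verbose"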
	I1227 20:16:14.519178  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:14.550966  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:16:14.551052  337106 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:16:15.019197  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:15.046385  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:16:15.046479  337106 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:16:15.519851  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:15.557956  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:16:15.558047  337106 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:16:16.019247  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:16.033187  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:16:16.033267  337106 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:16:16.519670  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:16.536800  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1227 20:16:16.539603  337106 api_server.go:141] control plane version: v1.35.0
	I1227 20:16:16.539669  337106 api_server.go:131] duration metric: took 2.52066052s to wait for apiserver health ...
	I1227 20:16:16.539693  337106 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:16:16.570231  337106 system_pods.go:59] 26 kube-system pods found
	I1227 20:16:16.570324  337106 system_pods.go:61] "coredns-7d764666f9-mf5xw" [5a7f58c2-f991-46f0-9ece-9a561d53d25f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:16.570350  337106 system_pods.go:61] "coredns-7d764666f9-n5d9d" [159febfd-c1e4-4897-a372-59e4a3069914] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:16.570386  337106 system_pods.go:61] "etcd-ha-422549" [8f26f563-e734-4add-aefe-484f0e873a1e] Running
	I1227 20:16:16.570414  337106 system_pods.go:61] "etcd-ha-422549-m02" [5fed7e48-07c4-4a07-b63b-0fccbd196f6f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:16:16.570435  337106 system_pods.go:61] "etcd-ha-422549-m03" [d22f78a1-2f4c-41e6-b65a-bf7108686c71] Running
	I1227 20:16:16.570460  337106 system_pods.go:61] "kindnet-28svl" [1494f795-941f-418e-8090-098225eb9c6a] Running
	I1227 20:16:16.570493  337106 system_pods.go:61] "kindnet-4hl7v" [ea2cc8a1-df16-440c-a093-a5d915b249b4] Running
	I1227 20:16:16.570521  337106 system_pods.go:61] "kindnet-5wczs" [df3d7298-4140-464f-a6e8-c614e1683488] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:16:16.570663  337106 system_pods.go:61] "kindnet-qkqmv" [66d834ae-af1b-456d-ae48-8a0d6608f961] Running
	I1227 20:16:16.570696  337106 system_pods.go:61] "kube-apiserver-ha-422549" [14f8e794-2ba7-477d-806b-03dd5a33d868] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:16:16.570721  337106 system_pods.go:61] "kube-apiserver-ha-422549-m02" [a4b97cc6-26ef-4d46-9ef9-bdee08eb89d6] Running
	I1227 20:16:16.570746  337106 system_pods.go:61] "kube-apiserver-ha-422549-m03" [71f23288-3e33-4bc8-9182-08c190ae026f] Running
	I1227 20:16:16.570787  337106 system_pods.go:61] "kube-controller-manager-ha-422549" [b69af60f-4eac-4e85-aa81-66b7616a46f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:16:16.570820  337106 system_pods.go:61] "kube-controller-manager-ha-422549-m02" [07c0e68f-76e5-4cee-92a2-05dd2fb4c3e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:16:16.570843  337106 system_pods.go:61] "kube-controller-manager-ha-422549-m03" [af291694-2986-455c-8588-c2879d10ff3b] Running
	I1227 20:16:16.570865  337106 system_pods.go:61] "kube-proxy-cg4z5" [42f74e61-eb67-4d02-8f08-f77f7163f5fc] Running
	I1227 20:16:16.570897  337106 system_pods.go:61] "kube-proxy-kscg6" [baa716d5-546a-4922-ba51-fe1116e36c75] Running
	I1227 20:16:16.570923  337106 system_pods.go:61] "kube-proxy-mhmmn" [d69029af-1fc4-4a31-913e-92e1231e845a] Running
	I1227 20:16:16.570948  337106 system_pods.go:61] "kube-proxy-nqr7h" [d0fc3ef5-765a-4376-94e6-42237908d3fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 20:16:16.570969  337106 system_pods.go:61] "kube-scheduler-ha-422549" [549e105d-d2e7-42b6-ae48-098d590e7b1d] Running
	I1227 20:16:16.571002  337106 system_pods.go:61] "kube-scheduler-ha-422549-m02" [db9187da-87a8-4b73-baea-76f3d9ef35c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:16:16.571026  337106 system_pods.go:61] "kube-scheduler-ha-422549-m03" [2a6b70b3-5303-404f-8b1d-1a65b9b81555] Running
	I1227 20:16:16.571044  337106 system_pods.go:61] "kube-vip-ha-422549" [32d647ce-90ed-4f56-b4c8-7ed445019d88] Running
	I1227 20:16:16.571067  337106 system_pods.go:61] "kube-vip-ha-422549-m02" [ddde9374-24b7-498d-b829-6902c612b272] Running
	I1227 20:16:16.571109  337106 system_pods.go:61] "kube-vip-ha-422549-m03" [39a60c56-1bf0-4232-9af0-f55e0c66a33d] Running
	I1227 20:16:16.571136  337106 system_pods.go:61] "storage-provisioner" [0d645eab-223f-4dd6-9518-6ab4a21d4c09] Running
	I1227 20:16:16.571156  337106 system_pods.go:74] duration metric: took 31.434553ms to wait for pod list to return data ...
	I1227 20:16:16.571179  337106 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:16:16.590199  337106 default_sa.go:45] found service account: "default"
	I1227 20:16:16.590265  337106 default_sa.go:55] duration metric: took 19.064027ms for default service account to be created ...
	I1227 20:16:16.590290  337106 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:16:16.623079  337106 system_pods.go:86] 26 kube-system pods found
	I1227 20:16:16.623169  337106 system_pods.go:89] "coredns-7d764666f9-mf5xw" [5a7f58c2-f991-46f0-9ece-9a561d53d25f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:16.623195  337106 system_pods.go:89] "coredns-7d764666f9-n5d9d" [159febfd-c1e4-4897-a372-59e4a3069914] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:16.623234  337106 system_pods.go:89] "etcd-ha-422549" [8f26f563-e734-4add-aefe-484f0e873a1e] Running
	I1227 20:16:16.623263  337106 system_pods.go:89] "etcd-ha-422549-m02" [5fed7e48-07c4-4a07-b63b-0fccbd196f6f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:16:16.623283  337106 system_pods.go:89] "etcd-ha-422549-m03" [d22f78a1-2f4c-41e6-b65a-bf7108686c71] Running
	I1227 20:16:16.623303  337106 system_pods.go:89] "kindnet-28svl" [1494f795-941f-418e-8090-098225eb9c6a] Running
	I1227 20:16:16.623335  337106 system_pods.go:89] "kindnet-4hl7v" [ea2cc8a1-df16-440c-a093-a5d915b249b4] Running
	I1227 20:16:16.623362  337106 system_pods.go:89] "kindnet-5wczs" [df3d7298-4140-464f-a6e8-c614e1683488] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:16:16.623385  337106 system_pods.go:89] "kindnet-qkqmv" [66d834ae-af1b-456d-ae48-8a0d6608f961] Running
	I1227 20:16:16.623411  337106 system_pods.go:89] "kube-apiserver-ha-422549" [14f8e794-2ba7-477d-806b-03dd5a33d868] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:16:16.623447  337106 system_pods.go:89] "kube-apiserver-ha-422549-m02" [a4b97cc6-26ef-4d46-9ef9-bdee08eb89d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:16:16.623475  337106 system_pods.go:89] "kube-apiserver-ha-422549-m03" [71f23288-3e33-4bc8-9182-08c190ae026f] Running
	I1227 20:16:16.623501  337106 system_pods.go:89] "kube-controller-manager-ha-422549" [b69af60f-4eac-4e85-aa81-66b7616a46f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:16:16.623525  337106 system_pods.go:89] "kube-controller-manager-ha-422549-m02" [07c0e68f-76e5-4cee-92a2-05dd2fb4c3e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:16:16.623557  337106 system_pods.go:89] "kube-controller-manager-ha-422549-m03" [af291694-2986-455c-8588-c2879d10ff3b] Running
	I1227 20:16:16.623583  337106 system_pods.go:89] "kube-proxy-cg4z5" [42f74e61-eb67-4d02-8f08-f77f7163f5fc] Running
	I1227 20:16:16.623607  337106 system_pods.go:89] "kube-proxy-kscg6" [baa716d5-546a-4922-ba51-fe1116e36c75] Running
	I1227 20:16:16.623632  337106 system_pods.go:89] "kube-proxy-mhmmn" [d69029af-1fc4-4a31-913e-92e1231e845a] Running
	I1227 20:16:16.623664  337106 system_pods.go:89] "kube-proxy-nqr7h" [d0fc3ef5-765a-4376-94e6-42237908d3fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 20:16:16.623690  337106 system_pods.go:89] "kube-scheduler-ha-422549" [549e105d-d2e7-42b6-ae48-098d590e7b1d] Running
	I1227 20:16:16.623713  337106 system_pods.go:89] "kube-scheduler-ha-422549-m02" [db9187da-87a8-4b73-baea-76f3d9ef35c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:16:16.623737  337106 system_pods.go:89] "kube-scheduler-ha-422549-m03" [2a6b70b3-5303-404f-8b1d-1a65b9b81555] Running
	I1227 20:16:16.623769  337106 system_pods.go:89] "kube-vip-ha-422549" [32d647ce-90ed-4f56-b4c8-7ed445019d88] Running
	I1227 20:16:16.623794  337106 system_pods.go:89] "kube-vip-ha-422549-m02" [ddde9374-24b7-498d-b829-6902c612b272] Running
	I1227 20:16:16.623818  337106 system_pods.go:89] "kube-vip-ha-422549-m03" [39a60c56-1bf0-4232-9af0-f55e0c66a33d] Running
	I1227 20:16:16.623842  337106 system_pods.go:89] "storage-provisioner" [0d645eab-223f-4dd6-9518-6ab4a21d4c09] Running
	I1227 20:16:16.623877  337106 system_pods.go:126] duration metric: took 33.567641ms to wait for k8s-apps to be running ...
	I1227 20:16:16.623905  337106 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:16:16.623994  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:16:16.670311  337106 system_svc.go:56] duration metric: took 46.39668ms WaitForService to wait for kubelet
	I1227 20:16:16.670384  337106 kubeadm.go:587] duration metric: took 18.324769156s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:16:16.670417  337106 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:16:16.708894  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:16.708992  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:16.709018  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:16.709039  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:16.709068  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:16.709094  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:16.709113  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:16.709132  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:16.709151  337106 node_conditions.go:105] duration metric: took 38.715442ms to run NodePressure ...
	I1227 20:16:16.709184  337106 start.go:242] waiting for startup goroutines ...
	I1227 20:16:16.709228  337106 start.go:256] writing updated cluster config ...
	I1227 20:16:16.713916  337106 out.go:203] 
	I1227 20:16:16.723292  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:16.723425  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:16:16.727142  337106 out.go:179] * Starting "ha-422549-m03" control-plane node in "ha-422549" cluster
	I1227 20:16:16.732478  337106 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:16:16.735844  337106 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:16:16.739409  337106 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:16:16.739458  337106 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:16:16.739659  337106 cache.go:65] Caching tarball of preloaded images
	I1227 20:16:16.739753  337106 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:16:16.739768  337106 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:16:16.739908  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:16:16.767918  337106 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:16:16.767942  337106 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:16:16.767957  337106 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:16:16.767980  337106 start.go:360] acquireMachinesLock for ha-422549-m03: {Name:mkf062d56fcf026ae5cb73bd2d2d3016f0f6c481 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:16:16.768043  337106 start.go:364] duration metric: took 41.697µs to acquireMachinesLock for "ha-422549-m03"
	I1227 20:16:16.768068  337106 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:16:16.768074  337106 fix.go:54] fixHost starting: m03
	I1227 20:16:16.768352  337106 cli_runner.go:164] Run: docker container inspect ha-422549-m03 --format={{.State.Status}}
	I1227 20:16:16.790621  337106 fix.go:112] recreateIfNeeded on ha-422549-m03: state=Stopped err=<nil>
	W1227 20:16:16.790653  337106 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:16:16.794891  337106 out.go:252] * Restarting existing docker container for "ha-422549-m03" ...
	I1227 20:16:16.794974  337106 cli_runner.go:164] Run: docker start ha-422549-m03
	I1227 20:16:17.149956  337106 cli_runner.go:164] Run: docker container inspect ha-422549-m03 --format={{.State.Status}}
	I1227 20:16:17.174958  337106 kic.go:430] container "ha-422549-m03" state is running.
	I1227 20:16:17.175307  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m03
	I1227 20:16:17.213633  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:16:17.213863  337106 machine.go:94] provisionDockerMachine start ...
	I1227 20:16:17.213929  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:17.241742  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:17.242041  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1227 20:16:17.242056  337106 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:16:17.242635  337106 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 20:16:20.405227  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m03
	
	I1227 20:16:20.405265  337106 ubuntu.go:182] provisioning hostname "ha-422549-m03"
	I1227 20:16:20.405335  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:20.447382  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:20.447685  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1227 20:16:20.447702  337106 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-422549-m03 && echo "ha-422549-m03" | sudo tee /etc/hostname
	I1227 20:16:20.641581  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m03
	
	I1227 20:16:20.641669  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:20.671096  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:20.671417  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1227 20:16:20.671491  337106 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422549-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422549-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422549-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:16:20.825909  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:16:20.825934  337106 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:16:20.825963  337106 ubuntu.go:190] setting up certificates
	I1227 20:16:20.825973  337106 provision.go:84] configureAuth start
	I1227 20:16:20.826043  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m03
	I1227 20:16:20.848683  337106 provision.go:143] copyHostCerts
	I1227 20:16:20.848722  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:16:20.848751  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:16:20.848757  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:16:20.848829  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:16:20.848936  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:16:20.848954  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:16:20.848959  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:16:20.848987  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:16:20.849035  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:16:20.849051  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:16:20.849055  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:16:20.849079  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:16:20.849139  337106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.ha-422549-m03 san=[127.0.0.1 192.168.49.4 ha-422549-m03 localhost minikube]
	I1227 20:16:20.958713  337106 provision.go:177] copyRemoteCerts
	I1227 20:16:20.958777  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:16:20.958919  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:20.978456  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m03/id_rsa Username:docker}
	I1227 20:16:21.097778  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:16:21.097855  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:16:21.118223  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:16:21.118280  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 20:16:21.171526  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:16:21.171643  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:16:21.238272  337106 provision.go:87] duration metric: took 412.285774ms to configureAuth
	I1227 20:16:21.238317  337106 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:16:21.238586  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:21.238711  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:21.261112  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:21.261428  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1227 20:16:21.261479  337106 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:16:22.736503  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:16:22.736545  337106 machine.go:97] duration metric: took 5.522665605s to provisionDockerMachine
	I1227 20:16:22.736559  337106 start.go:293] postStartSetup for "ha-422549-m03" (driver="docker")
	I1227 20:16:22.736569  337106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:16:22.736631  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:16:22.736681  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:22.757560  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m03/id_rsa Username:docker}
	I1227 20:16:22.872943  337106 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:16:22.877107  337106 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:16:22.877150  337106 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:16:22.877162  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:16:22.877224  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:16:22.877310  337106 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:16:22.877323  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:16:22.877568  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:16:22.887508  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:16:22.935543  337106 start.go:296] duration metric: took 198.968452ms for postStartSetup
	I1227 20:16:22.935675  337106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:16:22.935751  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:22.962394  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m03/id_rsa Username:docker}
	I1227 20:16:23.086315  337106 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:16:23.098060  337106 fix.go:56] duration metric: took 6.329978316s for fixHost
	I1227 20:16:23.098095  337106 start.go:83] releasing machines lock for "ha-422549-m03", held for 6.330038441s
	I1227 20:16:23.098169  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m03
	I1227 20:16:23.127385  337106 out.go:179] * Found network options:
	I1227 20:16:23.130521  337106 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1227 20:16:23.133556  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:23.133603  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:23.133636  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:23.133648  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	I1227 20:16:23.133723  337106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:16:23.133754  337106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:16:23.133766  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:23.133843  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:23.174788  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m03/id_rsa Username:docker}
	I1227 20:16:23.176337  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m03/id_rsa Username:docker}
	I1227 20:16:23.532310  337106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:16:23.539423  337106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:16:23.539508  337106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:16:23.547781  337106 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:16:23.547805  337106 start.go:496] detecting cgroup driver to use...
	I1227 20:16:23.547836  337106 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:16:23.547889  337106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:16:23.564242  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:16:23.579653  337106 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:16:23.579767  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:16:23.598176  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:16:23.613182  337106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:16:23.877595  337106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:16:24.169571  337106 docker.go:234] disabling docker service ...
	I1227 20:16:24.169685  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:16:24.197205  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:16:24.211488  337106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:16:24.466324  337106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:16:24.716660  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:16:24.734029  337106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:16:24.758554  337106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:16:24.758647  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.777034  337106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:16:24.777106  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.791147  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.805710  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.818822  337106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:16:24.828018  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.843848  337106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.852557  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.865822  337106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:16:24.881844  337106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:16:24.890467  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:16:25.116336  337106 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:16:26.436202  337106 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.319834137s)
	I1227 20:16:26.436227  337106 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:16:26.436285  337106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:16:26.440409  337106 start.go:574] Will wait 60s for crictl version
	I1227 20:16:26.440474  337106 ssh_runner.go:195] Run: which crictl
	I1227 20:16:26.444800  337106 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:16:26.475048  337106 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:16:26.475137  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:16:26.509827  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:16:26.549254  337106 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:16:26.552189  337106 out.go:179]   - env NO_PROXY=192.168.49.2
	I1227 20:16:26.555166  337106 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1227 20:16:26.558176  337106 cli_runner.go:164] Run: docker network inspect ha-422549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:16:26.575734  337106 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 20:16:26.580184  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:16:26.590410  337106 mustload.go:66] Loading cluster: ha-422549
	I1227 20:16:26.590667  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:26.590918  337106 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:16:26.608326  337106 host.go:66] Checking if "ha-422549" exists ...
	I1227 20:16:26.608672  337106 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549 for IP: 192.168.49.4
	I1227 20:16:26.608684  337106 certs.go:195] generating shared ca certs ...
	I1227 20:16:26.608708  337106 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:16:26.608822  337106 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:16:26.608870  337106 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:16:26.608877  337106 certs.go:257] generating profile certs ...
	I1227 20:16:26.608966  337106 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key
	I1227 20:16:26.609032  337106 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.d8cf7377
	I1227 20:16:26.609078  337106 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key
	I1227 20:16:26.609087  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:16:26.609099  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:16:26.609109  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:16:26.609121  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:16:26.609131  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:16:26.609142  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:16:26.609153  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:16:26.609163  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:16:26.609238  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:16:26.609270  337106 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:16:26.609278  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:16:26.609540  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:16:26.609594  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:16:26.609622  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:16:26.609673  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:16:26.609705  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:26.609718  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:16:26.609729  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:16:26.609784  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:16:26.627281  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:16:26.717750  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1227 20:16:26.722194  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1227 20:16:26.732379  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1227 20:16:26.736107  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1227 20:16:26.744795  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1227 20:16:26.748608  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1227 20:16:26.757298  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1227 20:16:26.760963  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1227 20:16:26.770282  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1227 20:16:26.774405  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1227 20:16:26.782912  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1227 20:16:26.787280  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1227 20:16:26.796054  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:16:26.815746  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:16:26.833735  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:16:26.852956  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:16:26.873558  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:16:26.893781  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:16:26.912114  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:16:26.930067  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:16:26.954144  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:16:26.992095  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:16:27.032398  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:16:27.058957  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1227 20:16:27.082646  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1227 20:16:27.099055  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1227 20:16:27.114942  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1227 20:16:27.128524  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1227 20:16:27.143949  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1227 20:16:27.166895  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (728 bytes)
	I1227 20:16:27.189731  337106 ssh_runner.go:195] Run: openssl version
	I1227 20:16:27.199330  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:27.207176  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:16:27.215001  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:27.218816  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:27.218944  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:27.262656  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:16:27.270122  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:16:27.278066  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:16:27.286224  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:16:27.290216  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:16:27.290299  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:16:27.331583  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:16:27.339149  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:16:27.347443  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:16:27.354941  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:16:27.358541  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:16:27.358644  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:16:27.401369  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:16:27.408555  337106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:16:27.412327  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:16:27.452918  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:16:27.493668  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:16:27.534423  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:16:27.575645  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:16:27.617601  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
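The six openssl invocations above are 24-hour freshness checks (`-checkend 86400`) on the existing control-plane certificates; a cert is only regenerated if it would expire within that window. A minimal Go sketch of the same check follows; it is an illustration, not minikube's code, and the path is simply one of the files checked above.

// certexpiry.go — sketch of the `openssl x509 -checkend 86400` check:
// exit non-zero if the certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile(os.Args[1]) // e.g. /var/lib/minikube/certs/etcd/server.crt
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	deadline := time.Now().Add(24 * time.Hour) // mirrors -checkend 86400
	if cert.NotAfter.Before(deadline) {
		fmt.Printf("certificate expires at %s (within 24h)\n", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Printf("certificate valid until %s\n", cert.NotAfter)
}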
	I1227 20:16:27.658239  337106 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.35.0 crio true true} ...
	I1227 20:16:27.658389  337106 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422549-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:16:27.658424  337106 kube-vip.go:115] generating kube-vip config ...
	I1227 20:16:27.658480  337106 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 20:16:27.670482  337106 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:16:27.670542  337106 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
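The manifest printed above is what gets copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines later (the 1358-byte scp). For illustration only, the short Go program below loads such a manifest and reports the VIP it advertises; it assumes k8s.io/api and sigs.k8s.io/yaml are on the module path and is not part of minikube.

// inspectvip.go — load the generated kube-vip static pod manifest and print
// the advertised control-plane VIP (192.168.49.254 in this run).
package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var pod corev1.Pod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if len(pod.Spec.Containers) == 0 {
		fmt.Fprintln(os.Stderr, "manifest has no containers")
		os.Exit(1)
	}
	for _, env := range pod.Spec.Containers[0].Env {
		if env.Name == "address" {
			fmt.Println("kube-vip will advertise:", env.Value)
		}
	}
}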
	I1227 20:16:27.670611  337106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:16:27.678382  337106 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:16:27.678493  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1227 20:16:27.688057  337106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 20:16:27.702120  337106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:16:27.721182  337106 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 20:16:27.736629  337106 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 20:16:27.740129  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:16:27.750576  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:16:27.920085  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:16:27.936290  337106 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:16:27.936639  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:27.941595  337106 out.go:179] * Verifying Kubernetes components...
	I1227 20:16:27.944502  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:16:28.098929  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:16:28.115947  337106 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1227 20:16:28.116063  337106 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1227 20:16:28.116301  337106 node_ready.go:35] waiting up to 6m0s for node "ha-422549-m03" to be "Ready" ...
	W1227 20:16:30.121347  337106 node_ready.go:57] node "ha-422549-m03" has "Ready":"Unknown" status (will retry)
	W1227 20:16:32.620007  337106 node_ready.go:57] node "ha-422549-m03" has "Ready":"Unknown" status (will retry)
	W1227 20:16:34.620221  337106 node_ready.go:57] node "ha-422549-m03" has "Ready":"Unknown" status (will retry)
	W1227 20:16:36.620631  337106 node_ready.go:57] node "ha-422549-m03" has "Ready":"Unknown" status (will retry)
	W1227 20:16:38.620914  337106 node_ready.go:57] node "ha-422549-m03" has "Ready":"Unknown" status (will retry)
	W1227 20:16:41.119914  337106 node_ready.go:57] node "ha-422549-m03" has "Ready":"Unknown" status (will retry)
	I1227 20:16:42.138199  337106 node_ready.go:49] node "ha-422549-m03" is "Ready"
	I1227 20:16:42.138234  337106 node_ready.go:38] duration metric: took 14.021894093s for node "ha-422549-m03" to be "Ready" ...
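The Ready wait above polls the node object until its NodeReady condition turns True (about 14s for ha-422549-m03 here). A rough client-go equivalent is sketched below; it assumes a kubeconfig at the default ~/.kube/config location, whereas minikube's own loop uses the client config logged a few lines earlier.

// nodeready.go — hedged sketch of the "waiting for node Ready" loop.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	deadline := time.Now().Add(6 * time.Minute) // same 6m budget as the log
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-422549-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for node to be Ready")
	os.Exit(1)
}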
	I1227 20:16:42.138250  337106 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:16:42.138320  337106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:16:42.201875  337106 api_server.go:72] duration metric: took 14.265538166s to wait for apiserver process to appear ...
	I1227 20:16:42.201905  337106 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:16:42.201928  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:42.211305  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1227 20:16:42.217811  337106 api_server.go:141] control plane version: v1.35.0
	I1227 20:16:42.217842  337106 api_server.go:131] duration metric: took 15.928834ms to wait for apiserver health ...
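The healthz wait above simply issues GETs against https://192.168.49.2:8443/healthz until a 200 comes back. A hedged sketch of such a poll in Go, trusting the cluster CA copied to /var/lib/minikube/certs/ca.crt earlier in this run (the real logic lives in minikube's api_server.go):

// healthzpoll.go — poll the apiserver healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				fmt.Println("apiserver healthz returned 200: ok")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "apiserver never became healthy")
	os.Exit(1)
}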
	I1227 20:16:42.217852  337106 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:16:42.235518  337106 system_pods.go:59] 26 kube-system pods found
	I1227 20:16:42.235637  337106 system_pods.go:61] "coredns-7d764666f9-mf5xw" [5a7f58c2-f991-46f0-9ece-9a561d53d25f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:42.235688  337106 system_pods.go:61] "coredns-7d764666f9-n5d9d" [159febfd-c1e4-4897-a372-59e4a3069914] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:42.235725  337106 system_pods.go:61] "etcd-ha-422549" [8f26f563-e734-4add-aefe-484f0e873a1e] Running
	I1227 20:16:42.235747  337106 system_pods.go:61] "etcd-ha-422549-m02" [5fed7e48-07c4-4a07-b63b-0fccbd196f6f] Running
	I1227 20:16:42.235772  337106 system_pods.go:61] "etcd-ha-422549-m03" [d22f78a1-2f4c-41e6-b65a-bf7108686c71] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:16:42.235810  337106 system_pods.go:61] "kindnet-28svl" [1494f795-941f-418e-8090-098225eb9c6a] Running
	I1227 20:16:42.235843  337106 system_pods.go:61] "kindnet-4hl7v" [ea2cc8a1-df16-440c-a093-a5d915b249b4] Running
	I1227 20:16:42.235869  337106 system_pods.go:61] "kindnet-5wczs" [df3d7298-4140-464f-a6e8-c614e1683488] Running
	I1227 20:16:42.235899  337106 system_pods.go:61] "kindnet-qkqmv" [66d834ae-af1b-456d-ae48-8a0d6608f961] Running
	I1227 20:16:42.235929  337106 system_pods.go:61] "kube-apiserver-ha-422549" [14f8e794-2ba7-477d-806b-03dd5a33d868] Running
	I1227 20:16:42.235961  337106 system_pods.go:61] "kube-apiserver-ha-422549-m02" [a4b97cc6-26ef-4d46-9ef9-bdee08eb89d6] Running
	I1227 20:16:42.235997  337106 system_pods.go:61] "kube-apiserver-ha-422549-m03" [71f23288-3e33-4bc8-9182-08c190ae026f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:16:42.236045  337106 system_pods.go:61] "kube-controller-manager-ha-422549" [b69af60f-4eac-4e85-aa81-66b7616a46f6] Running
	I1227 20:16:42.236083  337106 system_pods.go:61] "kube-controller-manager-ha-422549-m02" [07c0e68f-76e5-4cee-92a2-05dd2fb4c3e2] Running
	I1227 20:16:42.236112  337106 system_pods.go:61] "kube-controller-manager-ha-422549-m03" [af291694-2986-455c-8588-c2879d10ff3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:16:42.236140  337106 system_pods.go:61] "kube-proxy-cg4z5" [42f74e61-eb67-4d02-8f08-f77f7163f5fc] Running
	I1227 20:16:42.236179  337106 system_pods.go:61] "kube-proxy-kscg6" [baa716d5-546a-4922-ba51-fe1116e36c75] Running
	I1227 20:16:42.236206  337106 system_pods.go:61] "kube-proxy-mhmmn" [d69029af-1fc4-4a31-913e-92e1231e845a] Running
	I1227 20:16:42.236231  337106 system_pods.go:61] "kube-proxy-nqr7h" [d0fc3ef5-765a-4376-94e6-42237908d3fd] Running
	I1227 20:16:42.236262  337106 system_pods.go:61] "kube-scheduler-ha-422549" [549e105d-d2e7-42b6-ae48-098d590e7b1d] Running
	I1227 20:16:42.236297  337106 system_pods.go:61] "kube-scheduler-ha-422549-m02" [db9187da-87a8-4b73-baea-76f3d9ef35c7] Running
	I1227 20:16:42.236326  337106 system_pods.go:61] "kube-scheduler-ha-422549-m03" [2a6b70b3-5303-404f-8b1d-1a65b9b81555] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:16:42.236352  337106 system_pods.go:61] "kube-vip-ha-422549" [32d647ce-90ed-4f56-b4c8-7ed445019d88] Running
	I1227 20:16:42.236391  337106 system_pods.go:61] "kube-vip-ha-422549-m02" [ddde9374-24b7-498d-b829-6902c612b272] Running
	I1227 20:16:42.236414  337106 system_pods.go:61] "kube-vip-ha-422549-m03" [39a60c56-1bf0-4232-9af0-f55e0c66a33d] Running
	I1227 20:16:42.236441  337106 system_pods.go:61] "storage-provisioner" [0d645eab-223f-4dd6-9518-6ab4a21d4c09] Running
	I1227 20:16:42.236483  337106 system_pods.go:74] duration metric: took 18.617239ms to wait for pod list to return data ...
	I1227 20:16:42.236522  337106 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:16:42.247926  337106 default_sa.go:45] found service account: "default"
	I1227 20:16:42.248004  337106 default_sa.go:55] duration metric: took 11.459641ms for default service account to be created ...
	I1227 20:16:42.248030  337106 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:16:42.261989  337106 system_pods.go:86] 26 kube-system pods found
	I1227 20:16:42.262126  337106 system_pods.go:89] "coredns-7d764666f9-mf5xw" [5a7f58c2-f991-46f0-9ece-9a561d53d25f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:42.262177  337106 system_pods.go:89] "coredns-7d764666f9-n5d9d" [159febfd-c1e4-4897-a372-59e4a3069914] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:42.262207  337106 system_pods.go:89] "etcd-ha-422549" [8f26f563-e734-4add-aefe-484f0e873a1e] Running
	I1227 20:16:42.262236  337106 system_pods.go:89] "etcd-ha-422549-m02" [5fed7e48-07c4-4a07-b63b-0fccbd196f6f] Running
	I1227 20:16:42.262283  337106 system_pods.go:89] "etcd-ha-422549-m03" [d22f78a1-2f4c-41e6-b65a-bf7108686c71] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:16:42.262312  337106 system_pods.go:89] "kindnet-28svl" [1494f795-941f-418e-8090-098225eb9c6a] Running
	I1227 20:16:42.262338  337106 system_pods.go:89] "kindnet-4hl7v" [ea2cc8a1-df16-440c-a093-a5d915b249b4] Running
	I1227 20:16:42.262359  337106 system_pods.go:89] "kindnet-5wczs" [df3d7298-4140-464f-a6e8-c614e1683488] Running
	I1227 20:16:42.262394  337106 system_pods.go:89] "kindnet-qkqmv" [66d834ae-af1b-456d-ae48-8a0d6608f961] Running
	I1227 20:16:42.262426  337106 system_pods.go:89] "kube-apiserver-ha-422549" [14f8e794-2ba7-477d-806b-03dd5a33d868] Running
	I1227 20:16:42.262449  337106 system_pods.go:89] "kube-apiserver-ha-422549-m02" [a4b97cc6-26ef-4d46-9ef9-bdee08eb89d6] Running
	I1227 20:16:42.262479  337106 system_pods.go:89] "kube-apiserver-ha-422549-m03" [71f23288-3e33-4bc8-9182-08c190ae026f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:16:42.262522  337106 system_pods.go:89] "kube-controller-manager-ha-422549" [b69af60f-4eac-4e85-aa81-66b7616a46f6] Running
	I1227 20:16:42.262568  337106 system_pods.go:89] "kube-controller-manager-ha-422549-m02" [07c0e68f-76e5-4cee-92a2-05dd2fb4c3e2] Running
	I1227 20:16:42.262604  337106 system_pods.go:89] "kube-controller-manager-ha-422549-m03" [af291694-2986-455c-8588-c2879d10ff3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:16:42.262654  337106 system_pods.go:89] "kube-proxy-cg4z5" [42f74e61-eb67-4d02-8f08-f77f7163f5fc] Running
	I1227 20:16:42.262691  337106 system_pods.go:89] "kube-proxy-kscg6" [baa716d5-546a-4922-ba51-fe1116e36c75] Running
	I1227 20:16:42.262719  337106 system_pods.go:89] "kube-proxy-mhmmn" [d69029af-1fc4-4a31-913e-92e1231e845a] Running
	I1227 20:16:42.262764  337106 system_pods.go:89] "kube-proxy-nqr7h" [d0fc3ef5-765a-4376-94e6-42237908d3fd] Running
	I1227 20:16:42.262793  337106 system_pods.go:89] "kube-scheduler-ha-422549" [549e105d-d2e7-42b6-ae48-098d590e7b1d] Running
	I1227 20:16:42.262821  337106 system_pods.go:89] "kube-scheduler-ha-422549-m02" [db9187da-87a8-4b73-baea-76f3d9ef35c7] Running
	I1227 20:16:42.262867  337106 system_pods.go:89] "kube-scheduler-ha-422549-m03" [2a6b70b3-5303-404f-8b1d-1a65b9b81555] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:16:42.262896  337106 system_pods.go:89] "kube-vip-ha-422549" [32d647ce-90ed-4f56-b4c8-7ed445019d88] Running
	I1227 20:16:42.262923  337106 system_pods.go:89] "kube-vip-ha-422549-m02" [ddde9374-24b7-498d-b829-6902c612b272] Running
	I1227 20:16:42.262973  337106 system_pods.go:89] "kube-vip-ha-422549-m03" [39a60c56-1bf0-4232-9af0-f55e0c66a33d] Running
	I1227 20:16:42.263009  337106 system_pods.go:89] "storage-provisioner" [0d645eab-223f-4dd6-9518-6ab4a21d4c09] Running
	I1227 20:16:42.263038  337106 system_pods.go:126] duration metric: took 14.987495ms to wait for k8s-apps to be running ...
	I1227 20:16:42.263064  337106 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:16:42.263186  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:16:42.329952  337106 system_svc.go:56] duration metric: took 66.879518ms WaitForService to wait for kubelet
	I1227 20:16:42.330045  337106 kubeadm.go:587] duration metric: took 14.393713186s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:16:42.330082  337106 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:16:42.334874  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:42.334956  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:42.334985  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:42.335008  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:42.335041  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:42.335069  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:42.335090  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:42.335112  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:42.335144  337106 node_conditions.go:105] duration metric: took 5.018461ms to run NodePressure ...
	I1227 20:16:42.335178  337106 start.go:242] waiting for startup goroutines ...
	I1227 20:16:42.335217  337106 start.go:256] writing updated cluster config ...
	I1227 20:16:42.338858  337106 out.go:203] 
	I1227 20:16:42.342208  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:42.342412  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:16:42.346339  337106 out.go:179] * Starting "ha-422549-m04" worker node in "ha-422549" cluster
	I1227 20:16:42.350180  337106 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:16:42.353431  337106 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:16:42.356594  337106 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:16:42.356748  337106 cache.go:65] Caching tarball of preloaded images
	I1227 20:16:42.356702  337106 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:16:42.357174  337106 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:16:42.357212  337106 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:16:42.357376  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:16:42.393103  337106 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:16:42.393129  337106 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:16:42.393143  337106 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:16:42.393176  337106 start.go:360] acquireMachinesLock for ha-422549-m04: {Name:mk6b025464d8c3992b9046b379a06dcb477a1541 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:16:42.393245  337106 start.go:364] duration metric: took 45.324µs to acquireMachinesLock for "ha-422549-m04"
	I1227 20:16:42.393264  337106 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:16:42.393270  337106 fix.go:54] fixHost starting: m04
	I1227 20:16:42.393757  337106 cli_runner.go:164] Run: docker container inspect ha-422549-m04 --format={{.State.Status}}
	I1227 20:16:42.411553  337106 fix.go:112] recreateIfNeeded on ha-422549-m04: state=Stopped err=<nil>
	W1227 20:16:42.411578  337106 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:16:42.414835  337106 out.go:252] * Restarting existing docker container for "ha-422549-m04" ...
	I1227 20:16:42.414929  337106 cli_runner.go:164] Run: docker start ha-422549-m04
	I1227 20:16:42.767967  337106 cli_runner.go:164] Run: docker container inspect ha-422549-m04 --format={{.State.Status}}
	I1227 20:16:42.792044  337106 kic.go:430] container "ha-422549-m04" state is running.
	I1227 20:16:42.792404  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m04
	I1227 20:16:42.827351  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:16:42.827599  337106 machine.go:94] provisionDockerMachine start ...
	I1227 20:16:42.827669  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:42.865289  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:42.865636  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 20:16:42.865647  337106 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:16:42.866300  337106 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43686->127.0.0.1:33198: read: connection reset by peer
	I1227 20:16:46.033368  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m04
	
	I1227 20:16:46.033393  337106 ubuntu.go:182] provisioning hostname "ha-422549-m04"
	I1227 20:16:46.033521  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:46.061318  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:46.061712  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 20:16:46.061729  337106 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-422549-m04 && echo "ha-422549-m04" | sudo tee /etc/hostname
	I1227 20:16:46.247170  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m04
	
	I1227 20:16:46.247258  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:46.267833  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:46.268212  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 20:16:46.268238  337106 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422549-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422549-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422549-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:16:46.421793  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:16:46.421817  337106 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:16:46.421834  337106 ubuntu.go:190] setting up certificates
	I1227 20:16:46.421844  337106 provision.go:84] configureAuth start
	I1227 20:16:46.421907  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m04
	I1227 20:16:46.450717  337106 provision.go:143] copyHostCerts
	I1227 20:16:46.450775  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:16:46.450808  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:16:46.450827  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:16:46.450912  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:16:46.450998  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:16:46.451024  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:16:46.451029  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:16:46.451060  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:16:46.451106  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:16:46.451128  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:16:46.451133  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:16:46.451165  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:16:46.451217  337106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.ha-422549-m04 san=[127.0.0.1 192.168.49.5 ha-422549-m04 localhost minikube]
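provision.go generates a per-machine server certificate whose SAN list is shown above (127.0.0.1, 192.168.49.5, ha-422549-m04, localhost, minikube), signed by the CA under .minikube/certs. The sketch below builds the same SAN set with Go's crypto/x509 but self-signs purely for brevity; it is an illustration of the mechanism, not minikube's implementation.

// servercert.go — emit a server certificate carrying the SANs from this run.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-422549-m04"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-422549-m04", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
	}
	// Self-signed here; minikube signs with its machine CA (ca.pem/ca-key.pem).
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}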
	I1227 20:16:46.849291  337106 provision.go:177] copyRemoteCerts
	I1227 20:16:46.849383  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:16:46.849466  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:46.871414  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m04/id_rsa Username:docker}
	I1227 20:16:46.969387  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:16:46.969501  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:16:46.998452  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:16:46.998518  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 20:16:47.021097  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:16:47.021160  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 20:16:47.040293  337106 provision.go:87] duration metric: took 618.436373ms to configureAuth
	I1227 20:16:47.040318  337106 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:16:47.040553  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:47.040650  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:47.060413  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:47.060713  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 20:16:47.060726  337106 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:16:47.416575  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:16:47.416595  337106 machine.go:97] duration metric: took 4.588981536s to provisionDockerMachine
	I1227 20:16:47.416607  337106 start.go:293] postStartSetup for "ha-422549-m04" (driver="docker")
	I1227 20:16:47.416618  337106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:16:47.416709  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:16:47.416753  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:47.436074  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m04/id_rsa Username:docker}
	I1227 20:16:47.541369  337106 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:16:47.545584  337106 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:16:47.545615  337106 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:16:47.545627  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:16:47.545689  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:16:47.545788  337106 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:16:47.545802  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:16:47.545901  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:16:47.553680  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:16:47.574171  337106 start.go:296] duration metric: took 157.548886ms for postStartSetup
	I1227 20:16:47.574295  337106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:16:47.574343  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:47.591734  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m04/id_rsa Username:docker}
	I1227 20:16:47.691874  337106 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:16:47.696839  337106 fix.go:56] duration metric: took 5.303562652s for fixHost
	I1227 20:16:47.696874  337106 start.go:83] releasing machines lock for "ha-422549-m04", held for 5.303620217s
	I1227 20:16:47.696941  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m04
	I1227 20:16:47.722974  337106 out.go:179] * Found network options:
	I1227 20:16:47.725907  337106 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1227 20:16:47.728701  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:47.728735  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:47.728747  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:47.728789  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:47.728805  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:47.728815  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	I1227 20:16:47.728903  337106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:16:47.728946  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:47.729221  337106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:16:47.729281  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:47.750771  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m04/id_rsa Username:docker}
	I1227 20:16:47.772821  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m04/id_rsa Username:docker}
	I1227 20:16:47.915331  337106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:16:47.990713  337106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:16:47.990795  337106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:16:48.000448  337106 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:16:48.000481  337106 start.go:496] detecting cgroup driver to use...
	I1227 20:16:48.000514  337106 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:16:48.000573  337106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:16:48.021384  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:16:48.039922  337106 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:16:48.040026  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:16:48.062813  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:16:48.079604  337106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:16:48.252416  337106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:16:48.379968  337106 docker.go:234] disabling docker service ...
	I1227 20:16:48.380079  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:16:48.396866  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:16:48.412804  337106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:16:48.580976  337106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:16:48.708477  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:16:48.723957  337106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:16:48.740271  337106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:16:48.740353  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.751954  337106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:16:48.752031  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.770376  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.788562  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.800161  337106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:16:48.809833  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.820365  337106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.838111  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.851461  337106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:16:48.859082  337106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:16:48.867125  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:16:49.040301  337106 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:16:49.267978  337106 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:16:49.268078  337106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:16:49.275575  337106 start.go:574] Will wait 60s for crictl version
	I1227 20:16:49.275679  337106 ssh_runner.go:195] Run: which crictl
	I1227 20:16:49.281419  337106 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:16:49.315494  337106 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:16:49.315644  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:16:49.369281  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:16:49.404637  337106 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:16:49.407552  337106 out.go:179]   - env NO_PROXY=192.168.49.2
	I1227 20:16:49.411293  337106 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1227 20:16:49.414211  337106 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1227 20:16:49.417170  337106 cli_runner.go:164] Run: docker network inspect ha-422549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:16:49.439158  337106 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 20:16:49.443392  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:16:49.460241  337106 mustload.go:66] Loading cluster: ha-422549
	I1227 20:16:49.460498  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:49.460747  337106 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:16:49.491043  337106 host.go:66] Checking if "ha-422549" exists ...
	I1227 20:16:49.491329  337106 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549 for IP: 192.168.49.5
	I1227 20:16:49.491337  337106 certs.go:195] generating shared ca certs ...
	I1227 20:16:49.491350  337106 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:16:49.491459  337106 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:16:49.491497  337106 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:16:49.491508  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:16:49.491519  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:16:49.491530  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:16:49.491540  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:16:49.491593  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:16:49.491624  337106 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:16:49.491632  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:16:49.491659  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:16:49.491683  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:16:49.491705  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:16:49.491748  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:16:49.491776  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:16:49.491789  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:49.491812  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:16:49.491829  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:16:49.515784  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:16:49.544429  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:16:49.565837  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:16:49.591774  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:16:49.613222  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:16:49.642392  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:16:49.671654  337106 ssh_runner.go:195] Run: openssl version
	I1227 20:16:49.680550  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:16:49.689578  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:16:49.699039  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:16:49.704553  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:16:49.704616  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:16:49.749850  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:16:49.758256  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:49.766307  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:16:49.776970  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:49.780927  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:49.781029  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:49.822773  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:16:49.830459  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:16:49.838202  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:16:49.847286  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:16:49.851257  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:16:49.851323  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:16:49.895472  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:16:49.903822  337106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:16:49.907501  337106 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 20:16:49.907548  337106 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.35.0 crio false true} ...
	I1227 20:16:49.907686  337106 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422549-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:16:49.907776  337106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:16:49.915527  337106 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:16:49.915638  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1227 20:16:49.923067  337106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 20:16:49.936470  337106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:16:49.951403  337106 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 20:16:49.955422  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:16:49.965541  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:16:50.111024  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:16:50.130778  337106 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1227 20:16:50.131217  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:50.136553  337106 out.go:179] * Verifying Kubernetes components...
	I1227 20:16:50.139597  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:16:50.312113  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:16:50.327943  337106 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1227 20:16:50.328030  337106 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1227 20:16:50.328306  337106 node_ready.go:35] waiting up to 6m0s for node "ha-422549-m04" to be "Ready" ...
	I1227 20:16:51.834080  337106 node_ready.go:49] node "ha-422549-m04" is "Ready"
	I1227 20:16:51.834112  337106 node_ready.go:38] duration metric: took 1.505787179s for node "ha-422549-m04" to be "Ready" ...
	I1227 20:16:51.834136  337106 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:16:51.834194  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:16:51.847783  337106 system_svc.go:56] duration metric: took 13.639755ms WaitForService to wait for kubelet
	I1227 20:16:51.847815  337106 kubeadm.go:587] duration metric: took 1.71699582s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:16:51.847835  337106 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:16:51.851110  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:51.851141  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:51.851154  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:51.851159  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:51.851164  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:51.851171  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:51.851174  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:51.851178  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:51.851184  337106 node_conditions.go:105] duration metric: took 3.342441ms to run NodePressure ...
	I1227 20:16:51.851198  337106 start.go:242] waiting for startup goroutines ...
	I1227 20:16:51.851223  337106 start.go:256] writing updated cluster config ...
	I1227 20:16:51.851550  337106 ssh_runner.go:195] Run: rm -f paused
	I1227 20:16:51.855763  337106 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:16:51.856293  337106 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 20:16:51.875834  337106 pod_ready.go:83] waiting for pod "coredns-7d764666f9-mf5xw" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 20:16:53.883849  337106 pod_ready.go:104] pod "coredns-7d764666f9-mf5xw" is not "Ready", error: <nil>
	W1227 20:16:56.461572  337106 pod_ready.go:104] pod "coredns-7d764666f9-mf5xw" is not "Ready", error: <nil>
	I1227 20:16:56.881855  337106 pod_ready.go:94] pod "coredns-7d764666f9-mf5xw" is "Ready"
	I1227 20:16:56.881886  337106 pod_ready.go:86] duration metric: took 5.006014091s for pod "coredns-7d764666f9-mf5xw" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.881896  337106 pod_ready.go:83] waiting for pod "coredns-7d764666f9-n5d9d" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.887788  337106 pod_ready.go:94] pod "coredns-7d764666f9-n5d9d" is "Ready"
	I1227 20:16:56.887818  337106 pod_ready.go:86] duration metric: took 5.91483ms for pod "coredns-7d764666f9-n5d9d" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.891258  337106 pod_ready.go:83] waiting for pod "etcd-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.898397  337106 pod_ready.go:94] pod "etcd-ha-422549" is "Ready"
	I1227 20:16:56.898437  337106 pod_ready.go:86] duration metric: took 7.137144ms for pod "etcd-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.898449  337106 pod_ready.go:83] waiting for pod "etcd-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.906314  337106 pod_ready.go:94] pod "etcd-ha-422549-m02" is "Ready"
	I1227 20:16:56.906341  337106 pod_ready.go:86] duration metric: took 7.885849ms for pod "etcd-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.906352  337106 pod_ready.go:83] waiting for pod "etcd-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:57.076308  337106 request.go:683] "Waited before sending request" delay="167.221744ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m03"
	I1227 20:16:57.080536  337106 pod_ready.go:94] pod "etcd-ha-422549-m03" is "Ready"
	I1227 20:16:57.080564  337106 pod_ready.go:86] duration metric: took 174.205244ms for pod "etcd-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:57.276888  337106 request.go:683] "Waited before sending request" delay="196.187905ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1227 20:16:57.280390  337106 pod_ready.go:83] waiting for pod "kube-apiserver-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:57.476826  337106 request.go:683] "Waited before sending request" delay="196.340204ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-422549"
	I1227 20:16:57.677055  337106 request.go:683] "Waited before sending request" delay="195.372363ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549"
	I1227 20:16:57.680148  337106 pod_ready.go:94] pod "kube-apiserver-ha-422549" is "Ready"
	I1227 20:16:57.680173  337106 pod_ready.go:86] duration metric: took 399.753981ms for pod "kube-apiserver-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:57.680183  337106 pod_ready.go:83] waiting for pod "kube-apiserver-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:57.876636  337106 request.go:683] "Waited before sending request" delay="196.366115ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-422549-m02"
	I1227 20:16:58.076883  337106 request.go:683] "Waited before sending request" delay="195.240889ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m02"
	I1227 20:16:58.081595  337106 pod_ready.go:94] pod "kube-apiserver-ha-422549-m02" is "Ready"
	I1227 20:16:58.081624  337106 pod_ready.go:86] duration metric: took 401.434113ms for pod "kube-apiserver-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:58.081636  337106 pod_ready.go:83] waiting for pod "kube-apiserver-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:58.277078  337106 request.go:683] "Waited before sending request" delay="195.329053ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-422549-m03"
	I1227 20:16:58.476156  337106 request.go:683] "Waited before sending request" delay="193.265737ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m03"
	I1227 20:16:58.479583  337106 pod_ready.go:94] pod "kube-apiserver-ha-422549-m03" is "Ready"
	I1227 20:16:58.479609  337106 pod_ready.go:86] duration metric: took 397.939042ms for pod "kube-apiserver-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:58.677038  337106 request.go:683] "Waited before sending request" delay="197.311256ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1227 20:16:58.680893  337106 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:58.876237  337106 request.go:683] "Waited before sending request" delay="195.249704ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-422549"
	I1227 20:16:59.076160  337106 request.go:683] "Waited before sending request" delay="194.26927ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549"
	I1227 20:16:59.079502  337106 pod_ready.go:94] pod "kube-controller-manager-ha-422549" is "Ready"
	I1227 20:16:59.079531  337106 pod_ready.go:86] duration metric: took 398.612222ms for pod "kube-controller-manager-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:59.079542  337106 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:59.276926  337106 request.go:683] "Waited before sending request" delay="197.310947ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-422549-m02"
	I1227 20:16:59.476987  337106 request.go:683] "Waited before sending request" delay="195.346795ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m02"
	I1227 20:16:59.480256  337106 pod_ready.go:94] pod "kube-controller-manager-ha-422549-m02" is "Ready"
	I1227 20:16:59.480288  337106 pod_ready.go:86] duration metric: took 400.738794ms for pod "kube-controller-manager-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:59.480298  337106 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:59.676709  337106 request.go:683] "Waited before sending request" delay="196.313782ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-422549-m03"
	I1227 20:16:59.876936  337106 request.go:683] "Waited before sending request" delay="194.422474ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m03"
	I1227 20:16:59.880871  337106 pod_ready.go:94] pod "kube-controller-manager-ha-422549-m03" is "Ready"
	I1227 20:16:59.880898  337106 pod_ready.go:86] duration metric: took 400.592723ms for pod "kube-controller-manager-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:00.077121  337106 request.go:683] "Waited before sending request" delay="196.103919ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1227 20:17:00.089664  337106 pod_ready.go:83] waiting for pod "kube-proxy-cg4z5" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:00.277067  337106 request.go:683] "Waited before sending request" delay="187.22976ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cg4z5"
	I1227 20:17:00.476439  337106 request.go:683] "Waited before sending request" delay="191.18971ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m03"
	I1227 20:17:00.480835  337106 pod_ready.go:94] pod "kube-proxy-cg4z5" is "Ready"
	I1227 20:17:00.480892  337106 pod_ready.go:86] duration metric: took 391.133363ms for pod "kube-proxy-cg4z5" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:00.480907  337106 pod_ready.go:83] waiting for pod "kube-proxy-kscg6" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:00.676146  337106 request.go:683] "Waited before sending request" delay="195.116873ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kscg6"
	I1227 20:17:00.876152  337106 request.go:683] "Waited before sending request" delay="192.262917ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m04"
	I1227 20:17:00.881008  337106 pod_ready.go:94] pod "kube-proxy-kscg6" is "Ready"
	I1227 20:17:00.881038  337106 pod_ready.go:86] duration metric: took 400.122065ms for pod "kube-proxy-kscg6" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:00.881048  337106 pod_ready.go:83] waiting for pod "kube-proxy-mhmmn" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:01.076325  337106 request.go:683] "Waited before sending request" delay="195.195166ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mhmmn"
	I1227 20:17:01.276909  337106 request.go:683] "Waited before sending request" delay="195.293101ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549"
	I1227 20:17:01.280680  337106 pod_ready.go:94] pod "kube-proxy-mhmmn" is "Ready"
	I1227 20:17:01.280710  337106 pod_ready.go:86] duration metric: took 399.654071ms for pod "kube-proxy-mhmmn" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:01.280722  337106 pod_ready.go:83] waiting for pod "kube-proxy-nqr7h" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:01.476964  337106 request.go:683] "Waited before sending request" delay="196.12986ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqr7h"
	I1227 20:17:01.676540  337106 request.go:683] "Waited before sending request" delay="192.49818ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m02"
	I1227 20:17:01.685668  337106 pod_ready.go:94] pod "kube-proxy-nqr7h" is "Ready"
	I1227 20:17:01.685702  337106 pod_ready.go:86] duration metric: took 404.972449ms for pod "kube-proxy-nqr7h" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:01.876169  337106 request.go:683] "Waited before sending request" delay="190.319322ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1227 20:17:01.882184  337106 pod_ready.go:83] waiting for pod "kube-scheduler-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:02.076778  337106 request.go:683] "Waited before sending request" delay="194.39653ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-422549"
	I1227 20:17:02.277097  337106 request.go:683] "Waited before sending request" delay="189.264505ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549"
	I1227 20:17:02.281682  337106 pod_ready.go:94] pod "kube-scheduler-ha-422549" is "Ready"
	I1227 20:17:02.281718  337106 pod_ready.go:86] duration metric: took 399.422109ms for pod "kube-scheduler-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:02.281728  337106 pod_ready.go:83] waiting for pod "kube-scheduler-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:02.477021  337106 request.go:683] "Waited before sending request" delay="195.180295ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-422549-m02"
	I1227 20:17:02.676336  337106 request.go:683] "Waited before sending request" delay="193.224619ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m02"
	I1227 20:17:02.680037  337106 pod_ready.go:94] pod "kube-scheduler-ha-422549-m02" is "Ready"
	I1227 20:17:02.680112  337106 pod_ready.go:86] duration metric: took 398.375125ms for pod "kube-scheduler-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:02.680126  337106 pod_ready.go:83] waiting for pod "kube-scheduler-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:02.876405  337106 request.go:683] "Waited before sending request" delay="196.195019ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-422549-m03"
	I1227 20:17:03.076174  337106 request.go:683] "Waited before sending request" delay="195.233596ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m03"
	I1227 20:17:03.079768  337106 pod_ready.go:94] pod "kube-scheduler-ha-422549-m03" is "Ready"
	I1227 20:17:03.079800  337106 pod_ready.go:86] duration metric: took 399.666897ms for pod "kube-scheduler-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:03.079847  337106 pod_ready.go:40] duration metric: took 11.224018864s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:17:03.152145  337106 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 20:17:03.155161  337106 out.go:203] 
	W1227 20:17:03.158240  337106 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 20:17:03.161317  337106 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 20:17:03.164544  337106 out.go:179] * Done! kubectl is now configured to use "ha-422549" cluster and "default" namespace by default
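	For context, the pod_ready.go and request.go lines above come from a client-go loop that polls each kube-system control-plane pod for the Ready condition; the "client-side throttling" waits appear because the rest.Config shown earlier leaves QPS and Burst at zero, so the client-go defaults apply. A rough sketch of that kind of readiness poll follows, with a kubeconfig path and pod name chosen purely for illustration — it is not minikube's actual implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path; the report builds its config from the
		// profile's client.crt/client.key directly.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		// With QPS/Burst left at zero, client-go falls back to its defaults, which
		// is what produces the "client-side throttling" waits in the log above.
		cfg.QPS = 50
		cfg.Burst = 100

		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Wait up to 4 minutes for one kube-system pod to report the Ready condition.
		podName := "coredns-7d764666f9-mf5xw" // name taken from the log above
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods("kube-system").Get(ctx, podName, metav1.GetOptions{})
				if err != nil {
					return false, nil // keep polling on transient errors
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %q is Ready\n", podName)
	}

	Raising QPS/Burst as in the sketch is the usual way to avoid the client-side throttling pauses when a tool issues many GETs in a short burst.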
	
	
	==> CRI-O <==
	Dec 27 20:16:14 ha-422549 crio[669]: time="2025-12-27T20:16:14.963662144Z" level=info msg="Started container" PID=1165 containerID=e30e2fc201d45a408198fe1cf19728fccd5ebe17d0f5255f7589564c690889ec description=kube-system/kube-proxy-mhmmn/kube-proxy id=83f9017b-13c2-4c2b-927f-e22b6986096d name=/runtime.v1.RuntimeService/StartContainer sandboxID=6495c9a31e01c2f5ac17768f9f5e13a5423c5594fc2867804e3bb0a908221252
	Dec 27 20:16:45 ha-422549 conmon[1143]: conmon 7acd50dc5298fb99db44 <ninfo>: container 1152 exited with status 1
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.428315945Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f60cbd10-f7b2-4cd1-80a7-fccba0550911 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.43511179Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=994ad400-2597-4615-b648-cdef116922a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.438853907Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=52ecd850-72c2-4d8c-abb4-bcb68b155882 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.438953761Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.446454815Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.447683161Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/be0e461cabdcf17f5b8d1bb2222c3a204fd930be36abbb0859da36ab3d16462f/merged/etc/passwd: no such file or directory"
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.447776861Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/be0e461cabdcf17f5b8d1bb2222c3a204fd930be36abbb0859da36ab3d16462f/merged/etc/group: no such file or directory"
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.448117445Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.466884564Z" level=info msg="Created container 7361d14a41eae128627f7ec4143721dd6bb4d3ae719e332d08bda13887aca146: kube-system/storage-provisioner/storage-provisioner" id=52ecd850-72c2-4d8c-abb4-bcb68b155882 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.472967068Z" level=info msg="Starting container: 7361d14a41eae128627f7ec4143721dd6bb4d3ae719e332d08bda13887aca146" id=34f02e50-7595-4b71-82ea-dc48fe422b8c name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.475650188Z" level=info msg="Started container" PID=1422 containerID=7361d14a41eae128627f7ec4143721dd6bb4d3ae719e332d08bda13887aca146 description=kube-system/storage-provisioner/storage-provisioner id=34f02e50-7595-4b71-82ea-dc48fe422b8c name=/runtime.v1.RuntimeService/StartContainer sandboxID=735879ad1c236176f8b5399b57a79b6c0ab6195af5a05ee38eac2aa69480249f
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.268998026Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.274112141Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.274149957Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.274171495Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.277419129Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.277535811Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.27759697Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.281296488Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.281332581Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.281356277Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.285112877Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.28514943Z" level=info msg="Updated default CNI network name to kindnet"
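	The "CNI monitoring event" entries above show CRI-O watching /etc/cni/net.d while kindnet writes its conflist to a .temp file and renames it into place, after which the default CNI network is re-resolved. A small directory-watch sketch using fsnotify that reacts to the same CREATE/WRITE/RENAME events, for illustration only (not CRI-O's actual code):

	package main

	import (
		"log"

		"github.com/fsnotify/fsnotify"
	)

	func main() {
		w, err := fsnotify.NewWatcher()
		if err != nil {
			log.Fatal(err)
		}
		defer w.Close()

		// Watch the CNI config directory, as the CRI-O log lines above do.
		if err := w.Add("/etc/cni/net.d"); err != nil {
			log.Fatal(err)
		}

		for {
			select {
			case ev, ok := <-w.Events:
				if !ok {
					return
				}
				// A CREATE/WRITE/RENAME on a *.conflist(.temp) file is the cue to
				// re-scan the directory and pick the new default CNI network.
				if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 {
					log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
				}
			case err, ok := <-w.Errors:
				if !ok {
					return
				}
				log.Println("watch error:", err)
			}
		}
	}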
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	7361d14a41eae       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Running             storage-provisioner       4                   735879ad1c236       storage-provisioner                 kube-system
	7879d1a6c6a98       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf   2 minutes ago        Running             coredns                   2                   bd06f2852a595       coredns-7d764666f9-mf5xw            kube-system
	0fb071b8bd6b6       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   2 minutes ago        Running             busybox                   2                   cf93f418a9a0a       busybox-769dd8b7dd-k7ks6            default
	7acd50dc5298f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   2 minutes ago        Exited              storage-provisioner       3                   735879ad1c236       storage-provisioner                 kube-system
	e30e2fc201d45       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   2 minutes ago        Running             kube-proxy                2                   6495c9a31e01c       kube-proxy-mhmmn                    kube-system
	595cf90732ea1       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf   2 minutes ago        Running             coredns                   2                   6e45d9e1ac155       coredns-7d764666f9-n5d9d            kube-system
	f4b4244b1db16       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13   2 minutes ago        Running             kindnet-cni               2                   828118b404202       kindnet-qkqmv                       kube-system
	8a1b0b47a0ed1       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   2 minutes ago        Running             kube-controller-manager   7                   75a2af3dd93e9       kube-controller-manager-ha-422549   kube-system
	acdd287d4087f       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   2 minutes ago        Running             kube-scheduler            2                   ee19621eddf01       kube-scheduler-ha-422549            kube-system
	7c4ac1dbe59ad       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   2 minutes ago        Exited              kube-controller-manager   6                   75a2af3dd93e9       kube-controller-manager-ha-422549   kube-system
	6b0b91d1da0a4       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   2 minutes ago        Running             kube-apiserver            3                   025c49d6ec070       kube-apiserver-ha-422549            kube-system
	776b31832bd3b       28c5662932f6032ee4faba083d9c2af90232797e1d4f89d9892cb92b26fec299   2 minutes ago        Running             kube-vip                  1                   66af5fba1f89e       kube-vip-ha-422549                  kube-system
	97ce57129ce3b       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   2 minutes ago        Running             etcd                      2                   77b191af13e7e       etcd-ha-422549                      kube-system
	
	
	==> coredns [595cf90732ea108872ec4fb5764679f01619c8baa8a4aca8307dd9cb64a9120f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:35202 - 54427 "HINFO IN 8582221969168170305.1983723465531701443. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.038347152s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	
	
	==> coredns [7879d1a6c6a98b3b227de2b37ae12cd1a3492d804d3ec108fe982379de5ffd0c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:46822 - 1915 "HINFO IN 1020865313171851806.989409873494633985. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013088569s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-422549
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_03_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:03:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:18:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:17:57 +0000   Sat, 27 Dec 2025 20:03:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:17:57 +0000   Sat, 27 Dec 2025 20:03:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:17:57 +0000   Sat, 27 Dec 2025 20:03:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:17:57 +0000   Sat, 27 Dec 2025 20:09:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-422549
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                acd356f3-8732-454f-9ea5-4ebb90b80a04
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-769dd8b7dd-k7ks6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7d764666f9-mf5xw             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     15m
	  kube-system                 coredns-7d764666f9-n5d9d             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     15m
	  kube-system                 etcd-ha-422549                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-qkqmv                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-422549             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-422549    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-mhmmn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-422549             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-422549                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m6s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  15m    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  14m    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  13m    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  8m46s  node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  2m18s  node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  2m17s  node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  112s   node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  51s    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	
	
	Name:               ha-422549-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_27T20_04_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:04:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:18:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:18:05 +0000   Sat, 27 Dec 2025 20:16:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:18:05 +0000   Sat, 27 Dec 2025 20:16:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:18:05 +0000   Sat, 27 Dec 2025 20:16:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:18:05 +0000   Sat, 27 Dec 2025 20:16:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-422549-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                279e934d-6d34-4a11-83f0-a7f36011d6a2
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-769dd8b7dd-v6vks                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-422549-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-5wczs                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-422549-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-422549-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-nqr7h                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-422549-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-422549-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  14m    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  14m    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  13m    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  8m47s  node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  NodeNotReady    7m57s  node-controller  Node ha-422549-m02 status is now: NodeNotReady
	  Normal  RegisteredNode  2m19s  node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  2m18s  node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  113s   node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  52s    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	
	
	Name:               ha-422549-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_27T20_04_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:04:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:18:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:16:41 +0000   Sat, 27 Dec 2025 20:16:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:16:41 +0000   Sat, 27 Dec 2025 20:16:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:16:41 +0000   Sat, 27 Dec 2025 20:16:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:16:41 +0000   Sat, 27 Dec 2025 20:16:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-422549-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                dd826b6d-21ec-45c4-b392-2d4b9b2daddb
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-769dd8b7dd-qcz4b                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-422549-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-28svl                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-422549-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-422549-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-cg4z5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-422549-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-422549-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  13m    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  13m    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  13m    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  8m47s  node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  NodeNotReady    7m57s  node-controller  Node ha-422549-m03 status is now: NodeNotReady
	  Normal  RegisteredNode  2m19s  node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  2m18s  node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  113s   node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  52s    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	
	
	Name:               ha-422549-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_27T20_05_33_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:05:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:18:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:18:33 +0000   Sat, 27 Dec 2025 20:16:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:18:33 +0000   Sat, 27 Dec 2025 20:16:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:18:33 +0000   Sat, 27 Dec 2025 20:16:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:18:33 +0000   Sat, 27 Dec 2025 20:16:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-422549-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                45c0e480-898e-46d5-83ce-c457d7b4b021
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4hl7v       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-proxy-kscg6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  13m    node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  13m    node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  13m    node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  8m47s  node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  NodeNotReady    7m57s  node-controller  Node ha-422549-m04 status is now: NodeNotReady
	  Normal  RegisteredNode  2m19s  node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  2m18s  node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  113s   node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  52s    node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	
	
	Name:               ha-422549-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_27T20_17_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:17:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549-m05
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:18:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:18:32 +0000   Sat, 27 Dec 2025 20:17:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:18:32 +0000   Sat, 27 Dec 2025 20:17:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:18:32 +0000   Sat, 27 Dec 2025 20:17:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:18:32 +0000   Sat, 27 Dec 2025 20:18:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.6
	  Hostname:    ha-422549-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                c1c7de59-aebb-4531-b34d-d2fd7fb1d4ab
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-422549-m05                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         48s
	  kube-system                 kindnet-8jzbd                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      50s
	  kube-system                 kube-apiserver-ha-422549-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-controller-manager-ha-422549-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-proxy-5dh85                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-scheduler-ha-422549-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-vip-ha-422549-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  48s   node-controller  Node ha-422549-m05 event: Registered Node ha-422549-m05 in Controller
	  Normal  RegisteredNode  48s   node-controller  Node ha-422549-m05 event: Registered Node ha-422549-m05 in Controller
	  Normal  RegisteredNode  48s   node-controller  Node ha-422549-m05 event: Registered Node ha-422549-m05 in Controller
	  Normal  RegisteredNode  47s   node-controller  Node ha-422549-m05 event: Registered Node ha-422549-m05 in Controller
	
	
	==> dmesg <==
	[Dec27 19:28] overlayfs: idmapped layers are currently not supported
	[ +28.388596] overlayfs: idmapped layers are currently not supported
	[Dec27 19:29] overlayfs: idmapped layers are currently not supported
	[  +9.242530] overlayfs: idmapped layers are currently not supported
	[Dec27 19:30] overlayfs: idmapped layers are currently not supported
	[ +11.577339] overlayfs: idmapped layers are currently not supported
	[Dec27 19:32] overlayfs: idmapped layers are currently not supported
	[ +19.186532] overlayfs: idmapped layers are currently not supported
	[Dec27 19:34] overlayfs: idmapped layers are currently not supported
	[Dec27 19:54] kauditd_printk_skb: 8 callbacks suppressed
	[Dec27 19:56] overlayfs: idmapped layers are currently not supported
	[Dec27 19:59] overlayfs: idmapped layers are currently not supported
	[Dec27 20:00] overlayfs: idmapped layers are currently not supported
	[Dec27 20:03] overlayfs: idmapped layers are currently not supported
	[ +31.019083] overlayfs: idmapped layers are currently not supported
	[Dec27 20:04] overlayfs: idmapped layers are currently not supported
	[Dec27 20:05] overlayfs: idmapped layers are currently not supported
	[Dec27 20:06] overlayfs: idmapped layers are currently not supported
	[Dec27 20:07] overlayfs: idmapped layers are currently not supported
	[  +3.687478] overlayfs: idmapped layers are currently not supported
	[Dec27 20:15] overlayfs: idmapped layers are currently not supported
	[  +3.163851] overlayfs: idmapped layers are currently not supported
	[Dec27 20:16] overlayfs: idmapped layers are currently not supported
	[ +35.129102] overlayfs: idmapped layers are currently not supported
	[Dec27 20:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [97ce57129ce3bc803fd62d49e1f3f06d06aa64d93e2ef36f372084cbbd21e34a] <==
	{"level":"warn","ts":"2025-12-27T20:17:33.067093Z","caller":"embed/config_logging.go:194","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.6:58178","server-name":"","error":"read tcp 192.168.49.2:2379->192.168.49.6:58178: read: connection reset by peer"}
	{"level":"info","ts":"2025-12-27T20:17:33.070109Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(129412930287384796 10978419992923766050 12593026477526642892 13372017479021783969)"}
	{"level":"info","ts":"2025-12-27T20:17:33.070273Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"985b2ed141447d22"}
	{"level":"info","ts":"2025-12-27T20:17:33.070324Z","caller":"etcdserver/server.go:1768","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"985b2ed141447d22"}
	{"level":"warn","ts":"2025-12-27T20:17:33.073103Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"985b2ed141447d22","error":"EOF"}
	{"level":"info","ts":"2025-12-27T20:17:33.163842Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"985b2ed141447d22"}
	{"level":"info","ts":"2025-12-27T20:17:33.171972Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"985b2ed141447d22"}
	{"level":"warn","ts":"2025-12-27T20:17:33.270184Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"985b2ed141447d22","error":"failed to write 985b2ed141447d22 on stream Message (write tcp 192.168.49.2:2380->192.168.49.6:33198: write: broken pipe)"}
	{"level":"warn","ts":"2025-12-27T20:17:33.270416Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"985b2ed141447d22"}
	{"level":"info","ts":"2025-12-27T20:17:33.291423Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"985b2ed141447d22"}
	{"level":"info","ts":"2025-12-27T20:17:33.309344Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"985b2ed141447d22","stream-type":"stream Message"}
	{"level":"info","ts":"2025-12-27T20:17:33.309401Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"985b2ed141447d22"}
	{"level":"info","ts":"2025-12-27T20:17:33.310443Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"985b2ed141447d22","stream-type":"stream MsgApp v2"}
	{"level":"warn","ts":"2025-12-27T20:17:33.310487Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"985b2ed141447d22"}
	{"level":"info","ts":"2025-12-27T20:17:33.310499Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"985b2ed141447d22"}
	{"level":"info","ts":"2025-12-27T20:17:46.488500Z","caller":"etcdserver/server.go:2262","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-12-27T20:17:46.565766Z","caller":"traceutil/trace.go:172","msg":"trace[933768759] transaction","detail":"{read_only:false; response_revision:3112; number_of_response:1; }","duration":"118.199627ms","start":"2025-12-27T20:17:46.447555Z","end":"2025-12-27T20:17:46.565755Z","steps":["trace[933768759] 'process raft request'  (duration: 93.692633ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:17:46.565969Z","caller":"traceutil/trace.go:172","msg":"trace[2018106619] transaction","detail":"{read_only:false; number_of_response:1; response_revision:3113; }","duration":"103.296339ms","start":"2025-12-27T20:17:46.462666Z","end":"2025-12-27T20:17:46.565962Z","steps":["trace[2018106619] 'process raft request'  (duration: 90.454834ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:17:46.566053Z","caller":"traceutil/trace.go:172","msg":"trace[448559542] transaction","detail":"{read_only:false; number_of_response:1; response_revision:3113; }","duration":"103.309648ms","start":"2025-12-27T20:17:46.462738Z","end":"2025-12-27T20:17:46.566048Z","steps":["trace[448559542] 'process raft request'  (duration: 90.419996ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:17:46.566891Z","caller":"traceutil/trace.go:172","msg":"trace[957148971] transaction","detail":"{read_only:false; number_of_response:1; response_revision:3113; }","duration":"102.737799ms","start":"2025-12-27T20:17:46.462774Z","end":"2025-12-27T20:17:46.565512Z","steps":["trace[957148971] 'process raft request'  (duration: 90.850047ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-27T20:17:46.641888Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"116.168202ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-jq272\" limit:1 ","response":"range_response_count:1 size:3431"}
	{"level":"info","ts":"2025-12-27T20:17:46.643075Z","caller":"traceutil/trace.go:172","msg":"trace[892669865] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-jq272; range_end:; response_count:1; response_revision:3119; }","duration":"117.360857ms","start":"2025-12-27T20:17:46.525697Z","end":"2025-12-27T20:17:46.643058Z","steps":["trace[892669865] 'agreement among raft nodes before linearized reading'  (duration: 115.484825ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:17:47.012232Z","caller":"etcdserver/server.go:2262","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-12-27T20:17:51.433034Z","caller":"etcdserver/server.go:2262","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-12-27T20:18:02.986607Z","caller":"etcdserver/server.go:1872","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"985b2ed141447d22","bytes":6293382,"size":"6.3 MB","took":"30.716112113s"}
	
	
	==> kernel <==
	 20:18:36 up  2:01,  0 user,  load average: 1.43, 1.33, 1.39
	Linux ha-422549 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f4b4244b1db16ca451154424e89d4d56ce2b826c6f69b1c1fa82f892e7966881] <==
	I1227 20:18:15.275525       1 main.go:301] handling current node
	I1227 20:18:15.275560       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1227 20:18:15.275594       1 main.go:324] Node ha-422549-m02 has CIDR [10.244.1.0/24] 
	I1227 20:18:15.284253       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1227 20:18:15.284355       1 main.go:324] Node ha-422549-m03 has CIDR [10.244.2.0/24] 
	I1227 20:18:25.274554       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 20:18:25.274667       1 main.go:301] handling current node
	I1227 20:18:25.274691       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1227 20:18:25.274706       1 main.go:324] Node ha-422549-m02 has CIDR [10.244.1.0/24] 
	I1227 20:18:25.274874       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1227 20:18:25.274887       1 main.go:324] Node ha-422549-m03 has CIDR [10.244.2.0/24] 
	I1227 20:18:25.274964       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1227 20:18:25.274976       1 main.go:324] Node ha-422549-m04 has CIDR [10.244.3.0/24] 
	I1227 20:18:25.275034       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1227 20:18:25.275045       1 main.go:324] Node ha-422549-m05 has CIDR [10.244.4.0/24] 
	I1227 20:18:35.267887       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1227 20:18:35.267924       1 main.go:324] Node ha-422549-m03 has CIDR [10.244.2.0/24] 
	I1227 20:18:35.268162       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1227 20:18:35.268176       1 main.go:324] Node ha-422549-m04 has CIDR [10.244.3.0/24] 
	I1227 20:18:35.268262       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1227 20:18:35.268281       1 main.go:324] Node ha-422549-m05 has CIDR [10.244.4.0/24] 
	I1227 20:18:35.268360       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 20:18:35.268373       1 main.go:301] handling current node
	I1227 20:18:35.268386       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1227 20:18:35.268391       1 main.go:324] Node ha-422549-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [6b0b91d1da0a4c385d0d3110ebc1d18efbc54bab7d6da6bba31c072f2fbd4da9] <==
	I1227 20:16:13.796413       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:13.797072       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 20:16:13.797074       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 20:16:13.797100       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:13.797777       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 20:16:13.797963       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 20:16:13.798046       1 aggregator.go:187] initial CRD sync complete...
	I1227 20:16:13.798090       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 20:16:13.798127       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:16:13.798158       1 cache.go:39] Caches are synced for autoregister controller
	E1227 20:16:13.804997       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 20:16:13.818967       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:13.818980       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 20:16:13.819043       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 20:16:13.824892       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 20:16:13.829882       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:16:13.856520       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:16:13.903885       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:16:14.353399       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1227 20:16:16.144077       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1227 20:16:16.145490       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:16:16.162091       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:16:17.856302       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:16:18.028352       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 20:16:18.100041       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [7c4ac1dbe59ad7d3143dfe74886a6bc3058bfad37ae864b855a6e47c1a4d984e] <==
	I1227 20:15:51.302678       1 serving.go:386] Generated self-signed cert in-memory
	I1227 20:15:51.319186       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1227 20:15:51.319285       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:15:51.320999       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1227 20:15:51.321146       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1227 20:15:51.321625       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1227 20:15:51.321698       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1227 20:16:13.577648       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [8a1b0b47a0ed1caecc63a10c0f1f9666bd9ee325c50ecf1f6c7e085c9598dbfa] <==
	I1227 20:16:17.634716       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.634834       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.634959       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.635096       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.635317       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.635492       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.635766       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.656398       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.659067       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.751050       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549-m02"
	I1227 20:16:17.752259       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549-m03"
	I1227 20:16:17.752315       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549-m04"
	I1227 20:16:17.752343       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549"
	I1227 20:16:17.820816       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.820838       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:16:17.820843       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:16:17.829110       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.887401       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 20:16:51.537342       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-422549-m04"
	E1227 20:17:45.611567       1 certificate_controller.go:158] "Unhandled Error" err="Sync csr-dwmks failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-dwmks\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1227 20:17:46.276633       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-422549-m05\" does not exist"
	I1227 20:17:46.277659       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-422549-m04"
	I1227 20:17:46.316698       1 range_allocator.go:433] "Set node PodCIDR" node="ha-422549-m05" podCIDRs=["10.244.4.0/24"]
	I1227 20:17:48.058474       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549-m05"
	I1227 20:18:32.783820       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-422549-m04"
	
	
	==> kube-proxy [e30e2fc201d45a408198fe1cf19728fccd5ebe17d0f5255f7589564c690889ec] <==
	I1227 20:16:15.717666       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:16:16.119519       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:16:16.241830       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:16.241930       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1227 20:16:16.242046       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:16:16.278310       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:16:16.278410       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:16:16.293265       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:16:16.293750       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:16:16.293812       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:16:16.298528       1 config.go:200] "Starting service config controller"
	I1227 20:16:16.298607       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:16:16.298663       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:16:16.298690       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:16:16.302047       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:16:16.303313       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:16:16.304201       1 config.go:309] "Starting node config controller"
	I1227 20:16:16.304276       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:16:16.304307       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:16:16.399041       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:16:16.402314       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 20:16:16.412735       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [acdd287d4087fec2c7c00eb589c13b06231128c1441e2db4a8f74c57600a6e67] <==
	E1227 20:17:46.524702       1 framework.go:1544] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5dh85\": pod kube-proxy-5dh85 is already assigned to node \"ha-422549-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5dh85" node="ha-422549-m05"
	E1227 20:17:46.524974       1 schedule_one.go:370] "scheduler cache ForgetPod failed" err="pod 9f3f6c7d-38b1-4845-bb80-86214ed404f5(kube-system/kube-proxy-5dh85) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-5dh85"
	E1227 20:17:46.525768       1 schedule_one.go:1068] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-n8tp2\": pod kindnet-n8tp2 is already assigned to node \"ha-422549-m05\"" logger="UnhandledError" pod="kube-system/kindnet-n8tp2"
	I1227 20:17:46.525898       1 schedule_one.go:1081] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-jq272" node="ha-422549-m05"
	I1227 20:17:46.526049       1 schedule_one.go:1081] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-n8tp2" node="ha-422549-m05"
	E1227 20:17:46.525974       1 schedule_one.go:1068] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5dh85\": pod kube-proxy-5dh85 is already assigned to node \"ha-422549-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-5dh85"
	I1227 20:17:46.532106       1 schedule_one.go:1081] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5dh85" node="ha-422549-m05"
	E1227 20:17:46.586195       1 framework.go:1544] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-zkvfs\": pod kindnet-zkvfs is already assigned to node \"ha-422549-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-zkvfs" node="ha-422549-m05"
	E1227 20:17:46.586352       1 schedule_one.go:1068] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-zkvfs\": pod kindnet-zkvfs is already assigned to node \"ha-422549-m05\"" logger="UnhandledError" pod="kube-system/kindnet-zkvfs"
	E1227 20:17:46.586524       1 framework.go:1544] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7bg75\": pod kube-proxy-7bg75 is already assigned to node \"ha-422549-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7bg75" node="ha-422549-m05"
	E1227 20:17:46.588267       1 schedule_one.go:370] "scheduler cache ForgetPod failed" err="pod 26aa7532-a8d9-4383-b3c1-de0f94f67bbb(kube-system/kube-proxy-7bg75) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-7bg75"
	E1227 20:17:46.588363       1 schedule_one.go:1068] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7bg75\": pod kube-proxy-7bg75 is already assigned to node \"ha-422549-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-7bg75"
	E1227 20:17:46.588587       1 framework.go:1544] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vqzqr\": pod kindnet-vqzqr is already assigned to node \"ha-422549-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-vqzqr" node="ha-422549-m05"
	E1227 20:17:46.588647       1 schedule_one.go:370] "scheduler cache ForgetPod failed" err="pod d0346179-7b1e-48ee-b3fc-4192653b696b(kube-system/kindnet-vqzqr) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-vqzqr"
	E1227 20:17:46.591374       1 schedule_one.go:1068] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vqzqr\": pod kindnet-vqzqr is already assigned to node \"ha-422549-m05\"" logger="UnhandledError" pod="kube-system/kindnet-vqzqr"
	I1227 20:17:46.591476       1 schedule_one.go:1081] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-vqzqr" node="ha-422549-m05"
	I1227 20:17:46.591715       1 schedule_one.go:1081] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-7bg75" node="ha-422549-m05"
	E1227 20:17:47.054055       1 framework.go:1544] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-p6z5l\": pod kube-proxy-p6z5l is already assigned to node \"ha-422549-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-p6z5l" node="ha-422549-m05"
	E1227 20:17:47.081150       1 schedule_one.go:370] "scheduler cache ForgetPod failed" err="pod c136964d-e5da-458b-8f5e-451b33988bab(kube-system/kube-proxy-p6z5l) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-p6z5l"
	E1227 20:17:47.081267       1 schedule_one.go:1068] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-p6z5l\": pod kube-proxy-p6z5l is already assigned to node \"ha-422549-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-p6z5l"
	E1227 20:17:47.054341       1 framework.go:1544] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-8jzbd\": pod kindnet-8jzbd is already assigned to node \"ha-422549-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-8jzbd" node="ha-422549-m05"
	E1227 20:17:47.081381       1 schedule_one.go:370] "scheduler cache ForgetPod failed" err="pod 7b14ad85-d98b-47dc-bcfc-96d2202ac94e(kube-system/kindnet-8jzbd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-8jzbd"
	E1227 20:17:47.082504       1 schedule_one.go:1068] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8jzbd\": pod kindnet-8jzbd is already assigned to node \"ha-422549-m05\"" logger="UnhandledError" pod="kube-system/kindnet-8jzbd"
	I1227 20:17:47.082655       1 schedule_one.go:1081] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-8jzbd" node="ha-422549-m05"
	I1227 20:17:47.082609       1 schedule_one.go:1081] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-p6z5l" node="ha-422549-m05"
	
	
	==> kubelet <==
	Dec 27 20:16:15 ha-422549 kubelet[804]: E1227 20:16:15.322797     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	Dec 27 20:16:15 ha-422549 kubelet[804]: E1227 20:16:15.333130     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-ha-422549" containerName="kube-controller-manager"
	Dec 27 20:16:15 ha-422549 kubelet[804]: E1227 20:16:15.350577     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	Dec 27 20:16:15 ha-422549 kubelet[804]: I1227 20:16:15.550682     804 kubelet_node_status.go:74] "Attempting to register node" node="ha-422549"
	Dec 27 20:16:15 ha-422549 kubelet[804]: I1227 20:16:15.614938     804 kubelet_node_status.go:123] "Node was previously registered" node="ha-422549"
	Dec 27 20:16:15 ha-422549 kubelet[804]: I1227 20:16:15.615228     804 kubelet_node_status.go:77] "Successfully registered node" node="ha-422549"
	Dec 27 20:16:15 ha-422549 kubelet[804]: I1227 20:16:15.615315     804 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 27 20:16:15 ha-422549 kubelet[804]: I1227 20:16:15.616294     804 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 27 20:16:16 ha-422549 kubelet[804]: E1227 20:16:16.196898     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-ha-422549" containerName="kube-scheduler"
	Dec 27 20:16:16 ha-422549 kubelet[804]: E1227 20:16:16.353325     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	Dec 27 20:16:16 ha-422549 kubelet[804]: E1227 20:16:16.354607     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	Dec 27 20:16:20 ha-422549 kubelet[804]: E1227 20:16:20.687129     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-ha-422549" containerName="kube-controller-manager"
	Dec 27 20:16:21 ha-422549 kubelet[804]: E1227 20:16:21.706076     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-ha-422549" containerName="kube-apiserver"
	Dec 27 20:16:22 ha-422549 kubelet[804]: E1227 20:16:22.368737     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-ha-422549" containerName="kube-apiserver"
	Dec 27 20:16:30 ha-422549 kubelet[804]: E1227 20:16:30.696140     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-ha-422549" containerName="kube-controller-manager"
	Dec 27 20:16:45 ha-422549 kubelet[804]: I1227 20:16:45.426555     804 scope.go:122] "RemoveContainer" containerID="7acd50dc5298fb99db44502b466c9e34b79ddce5613479143c4c5834f09f1731"
	Dec 27 20:16:56 ha-422549 kubelet[804]: E1227 20:16:56.356173     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	Dec 27 20:16:56 ha-422549 kubelet[804]: E1227 20:16:56.356735     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	Dec 27 20:17:26 ha-422549 kubelet[804]: E1227 20:17:26.167299     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-ha-422549" containerName="kube-apiserver"
	Dec 27 20:17:32 ha-422549 kubelet[804]: E1227 20:17:32.167238     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-ha-422549" containerName="etcd"
	Dec 27 20:17:44 ha-422549 kubelet[804]: E1227 20:17:44.166617     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-ha-422549" containerName="kube-controller-manager"
	Dec 27 20:17:45 ha-422549 kubelet[804]: E1227 20:17:45.167196     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-ha-422549" containerName="kube-scheduler"
	Dec 27 20:18:14 ha-422549 kubelet[804]: E1227 20:18:14.167080     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	Dec 27 20:18:21 ha-422549 kubelet[804]: E1227 20:18:21.167434     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	Dec 27 20:18:33 ha-422549 kubelet[804]: E1227 20:18:33.167469     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-ha-422549" containerName="etcd"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-422549 -n ha-422549
helpers_test.go:270: (dbg) Run:  kubectl --context ha-422549 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (85.95s)
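The post-mortem step at helpers_test.go:270 above filters for pods that are not in the Running phase. Below is a minimal client-go sketch of the same query; it is a hypothetical illustration, not the harness's implementation, and it assumes the default kubeconfig already points at the ha-422549 context.

	// Hypothetical sketch: list pods whose phase is not Running across all
	// namespaces, mirroring the field selector used by the post-mortem step.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: ~/.kube/config exists and its current context is ha-422549.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}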

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.35890487s)
ha_test.go:305: expected profile "ha-422549" in json of 'profile list' to include 4 nodes but have 5 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-422549\",\"Status\":\"HAppy\",\"Config\":{\"Name\":\"ha-422549\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfssh
ares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.35.0\",\"ClusterName\":\"ha-422549\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.35.0\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"I
P\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.35.0\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.49.4\",\"Port\":8443,\"KubernetesVersion\":\"v1.35.0\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.35.0\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.168.49.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.35.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong
\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountM
Size\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000,\"Rosetta\":false},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-422549
helpers_test.go:244: (dbg) docker inspect ha-422549:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf",
	        "Created": "2025-12-27T20:03:01.682141141Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 337233,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:15:42.462104956Z",
	            "FinishedAt": "2025-12-27T20:15:41.57505881Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/hostname",
	        "HostsPath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/hosts",
	        "LogPath": "/var/lib/docker/containers/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf-json.log",
	        "Name": "/ha-422549",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-422549:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-422549",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf",
	                "LowerDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064/merged",
	                "UpperDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064/diff",
	                "WorkDir": "/var/lib/docker/overlay2/77c04e288a2174c2928103c138c12355972681725ac27c1ea9c8426f7ed51064/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-422549",
	                "Source": "/var/lib/docker/volumes/ha-422549/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-422549",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-422549",
	                "name.minikube.sigs.k8s.io": "ha-422549",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bb71ec3c47b900c0fa3f8d54314b359c784cf244167438faa167df26866a5f2b",
	            "SandboxKey": "/var/run/docker/netns/bb71ec3c47b9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-422549": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:de:7f:b9:2b:dc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9521cb9225c5842f69a8435c5cf5485b75f9a8b2c68158742ff27c2be32f5951",
	                    "EndpointID": "8d5c856b7af95de0f10e89f9cba406f7c7feb68311acbe9cee0239ed57d8152d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-422549",
	                        "53fd780c3df5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
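The NetworkSettings.Ports block above shows the apiserver's 8443/tcp endpoint published on 127.0.0.1:33186. As a hedged illustration (a common docker inspect template idiom, not necessarily what helpers_test.go runs), the mapped host port can be read back like this:

	// Hypothetical sketch: read the host port mapped to the container's 8443/tcp
	// endpoint with a docker inspect Go template, as reflected in the output above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
			"ha-422549").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("apiserver host port:", strings.TrimSpace(string(out)))
	}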
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-422549 -n ha-422549
helpers_test.go:253: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p ha-422549 logs -n 25: (1.650092628s)
helpers_test.go:261: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-422549 ssh -n ha-422549-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test_ha-422549-m03_ha-422549-m04.txt                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp testdata/cp-test.txt ha-422549-m04:/home/docker/cp-test.txt                                                             │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3848759327/001/cp-test_ha-422549-m04.txt │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt ha-422549:/home/docker/cp-test_ha-422549-m04_ha-422549.txt                       │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549 sudo cat /home/docker/cp-test_ha-422549-m04_ha-422549.txt                                                 │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt ha-422549-m02:/home/docker/cp-test_ha-422549-m04_ha-422549-m02.txt               │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m02 sudo cat /home/docker/cp-test_ha-422549-m04_ha-422549-m02.txt                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ cp      │ ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt ha-422549-m03:/home/docker/cp-test_ha-422549-m04_ha-422549-m03.txt               │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ ssh     │ ha-422549 ssh -n ha-422549-m03 sudo cat /home/docker/cp-test_ha-422549-m04_ha-422549-m03.txt                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ node    │ ha-422549 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ node    │ ha-422549 node start m02 --alsologtostderr -v 5                                                                                      │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:06 UTC │
	│ node    │ ha-422549 node list --alsologtostderr -v 5                                                                                           │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │                     │
	│ stop    │ ha-422549 stop --alsologtostderr -v 5                                                                                                │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:06 UTC │ 27 Dec 25 20:07 UTC │
	│ start   │ ha-422549 start --wait true --alsologtostderr -v 5                                                                                   │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:07 UTC │                     │
	│ node    │ ha-422549 node list --alsologtostderr -v 5                                                                                           │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:15 UTC │                     │
	│ node    │ ha-422549 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:15 UTC │                     │
	│ stop    │ ha-422549 stop --alsologtostderr -v 5                                                                                                │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:15 UTC │ 27 Dec 25 20:15 UTC │
	│ start   │ ha-422549 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:15 UTC │ 27 Dec 25 20:17 UTC │
	│ node    │ ha-422549 node add --control-plane --alsologtostderr -v 5                                                                            │ ha-422549 │ jenkins │ v1.37.0 │ 27 Dec 25 20:17 UTC │ 27 Dec 25 20:18 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
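For context, the Audit table above records the sequence that produced this state: cross-node cp/ssh checks, a stop/start of m02, a `node delete m03` and `node list` that never logged an end time, a full cluster stop and restart, and finally the `node add --control-plane` call this test exercises. A minimal shell reproduction of that last phase, assuming the `out/minikube-linux-arm64 -p ha-422549` invocation style used elsewhere in this report, would be:

    # stop the HA cluster, restart it on the crio runtime, then add a fourth control-plane node
    out/minikube-linux-arm64 -p ha-422549 stop --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-422549 start --wait true --alsologtostderr -v 5 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p ha-422549 node add --control-plane --alsologtostderr -v 5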
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:15:42
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:15:42.161076  337106 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:15:42.161339  337106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:15:42.161371  337106 out.go:374] Setting ErrFile to fd 2...
	I1227 20:15:42.161395  337106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:15:42.161910  337106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:15:42.162549  337106 out.go:368] Setting JSON to false
	I1227 20:15:42.163583  337106 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":7095,"bootTime":1766859448,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:15:42.163745  337106 start.go:143] virtualization:  
	I1227 20:15:42.167252  337106 out.go:179] * [ha-422549] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:15:42.171750  337106 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:15:42.172029  337106 notify.go:221] Checking for updates...
	I1227 20:15:42.178183  337106 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:15:42.181404  337106 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:15:42.184507  337106 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:15:42.187835  337106 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:15:42.191251  337106 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:15:42.194951  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:42.195780  337106 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:15:42.234793  337106 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:15:42.234922  337106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:15:42.302450  337106 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 20:15:42.291742685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:15:42.302570  337106 docker.go:319] overlay module found
	I1227 20:15:42.305766  337106 out.go:179] * Using the docker driver based on existing profile
	I1227 20:15:42.308585  337106 start.go:309] selected driver: docker
	I1227 20:15:42.308605  337106 start.go:928] validating driver "docker" against &{Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacc
el:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:15:42.308760  337106 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:15:42.308874  337106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:15:42.372262  337106 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 20:15:42.36286995 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:15:42.372694  337106 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:15:42.372727  337106 cni.go:84] Creating CNI manager for ""
	I1227 20:15:42.372789  337106 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1227 20:15:42.372841  337106 start.go:353] cluster config:
	{Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:15:42.376040  337106 out.go:179] * Starting "ha-422549" primary control-plane node in "ha-422549" cluster
	I1227 20:15:42.378965  337106 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:15:42.382020  337106 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:15:42.384910  337106 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:15:42.384967  337106 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:15:42.385060  337106 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:15:42.385090  337106 cache.go:65] Caching tarball of preloaded images
	I1227 20:15:42.385178  337106 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:15:42.385188  337106 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:15:42.385327  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:15:42.406731  337106 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:15:42.406754  337106 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:15:42.406775  337106 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:15:42.406807  337106 start.go:360] acquireMachinesLock for ha-422549: {Name:mk939e8ee4c2bedc86cc6a99d76298e7b2a26ce2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:15:42.406878  337106 start.go:364] duration metric: took 49.87µs to acquireMachinesLock for "ha-422549"
	I1227 20:15:42.406911  337106 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:15:42.406918  337106 fix.go:54] fixHost starting: 
	I1227 20:15:42.407176  337106 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:15:42.424618  337106 fix.go:112] recreateIfNeeded on ha-422549: state=Stopped err=<nil>
	W1227 20:15:42.424651  337106 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:15:42.429793  337106 out.go:252] * Restarting existing docker container for "ha-422549" ...
	I1227 20:15:42.429887  337106 cli_runner.go:164] Run: docker start ha-422549
	I1227 20:15:42.679169  337106 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:15:42.705015  337106 kic.go:430] container "ha-422549" state is running.
	I1227 20:15:42.705398  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:15:42.726555  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:15:42.726800  337106 machine.go:94] provisionDockerMachine start ...
	I1227 20:15:42.726868  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:42.751689  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:42.752020  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1227 20:15:42.752029  337106 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:15:42.752567  337106 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60238->127.0.0.1:33183: read: connection reset by peer
	I1227 20:15:45.888954  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549
	
	I1227 20:15:45.888987  337106 ubuntu.go:182] provisioning hostname "ha-422549"
	I1227 20:15:45.889052  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:45.906473  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:45.906784  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1227 20:15:45.906800  337106 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-422549 && echo "ha-422549" | sudo tee /etc/hostname
	I1227 20:15:46.050632  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549
	
	I1227 20:15:46.050726  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:46.069043  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:46.069357  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1227 20:15:46.069378  337106 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422549' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422549/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422549' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:15:46.210430  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
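The if/grep fragment above is how the provisioner pins the node's hostname to 127.0.1.1 in the container's /etc/hosts: it rewrites an existing 127.0.1.1 line or appends one. A quick way to confirm the result, assuming `minikube ssh` command pass-through works as usual for this profile, would be:

    out/minikube-linux-arm64 -p ha-422549 ssh -- grep ha-422549 /etc/hosts
    # expected to include a line like: 127.0.1.1 ha-422549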
	I1227 20:15:46.210454  337106 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:15:46.210475  337106 ubuntu.go:190] setting up certificates
	I1227 20:15:46.210485  337106 provision.go:84] configureAuth start
	I1227 20:15:46.210557  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:15:46.227543  337106 provision.go:143] copyHostCerts
	I1227 20:15:46.227593  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:15:46.227625  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:15:46.227646  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:15:46.227726  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:15:46.227825  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:15:46.227847  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:15:46.227858  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:15:46.227890  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:15:46.227942  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:15:46.227963  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:15:46.227975  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:15:46.228004  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:15:46.228059  337106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.ha-422549 san=[127.0.0.1 192.168.49.2 ha-422549 localhost minikube]
	I1227 20:15:46.477651  337106 provision.go:177] copyRemoteCerts
	I1227 20:15:46.477745  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:15:46.477812  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:46.494398  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:15:46.592817  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:15:46.592877  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1227 20:15:46.609148  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:15:46.609214  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:15:46.626129  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:15:46.626186  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:15:46.643096  337106 provision.go:87] duration metric: took 432.58782ms to configureAuth
	I1227 20:15:46.643124  337106 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:15:46.643376  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:46.643487  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:46.660667  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:46.661005  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1227 20:15:46.661026  337106 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:15:47.007057  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:15:47.007122  337106 machine.go:97] duration metric: took 4.280312247s to provisionDockerMachine
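The printf/tee command above writes a one-line environment drop-in and restarts CRI-O so the service CIDR 10.96.0.0/12 is treated as an insecure registry range (presumably consumed by the crio unit via an EnvironmentFile). Based on the printf and the echoed output in the log, the file should contain exactly:

    # /etc/sysconfig/crio.minikube, as written by the provisioner
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '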
	I1227 20:15:47.007150  337106 start.go:293] postStartSetup for "ha-422549" (driver="docker")
	I1227 20:15:47.007178  337106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:15:47.007279  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:15:47.007348  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:47.029053  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:15:47.129052  337106 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:15:47.132168  337106 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:15:47.132192  337106 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:15:47.132203  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:15:47.132254  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:15:47.132333  337106 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:15:47.132339  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:15:47.132433  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:15:47.139569  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:15:47.156024  337106 start.go:296] duration metric: took 148.843658ms for postStartSetup
	I1227 20:15:47.156149  337106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:15:47.156211  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:47.173109  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:15:47.266513  337106 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:15:47.270816  337106 fix.go:56] duration metric: took 4.86389233s for fixHost
	I1227 20:15:47.270844  337106 start.go:83] releasing machines lock for "ha-422549", held for 4.863953055s
	I1227 20:15:47.270913  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:15:47.287367  337106 ssh_runner.go:195] Run: cat /version.json
	I1227 20:15:47.287429  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:47.287703  337106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:15:47.287764  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:47.309269  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:15:47.309529  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:15:47.405178  337106 ssh_runner.go:195] Run: systemctl --version
	I1227 20:15:47.511199  337106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:15:47.547392  337106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:15:47.551737  337106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:15:47.551827  337106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:15:47.559324  337106 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:15:47.559347  337106 start.go:496] detecting cgroup driver to use...
	I1227 20:15:47.559388  337106 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:15:47.559434  337106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:15:47.574366  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:15:47.587100  337106 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:15:47.587164  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:15:47.602600  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:15:47.615779  337106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:15:47.738070  337106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:15:47.863690  337106 docker.go:234] disabling docker service ...
	I1227 20:15:47.863793  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:15:47.878841  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:15:47.891780  337106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:15:48.005581  337106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:15:48.146501  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:15:48.159335  337106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:15:48.172971  337106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:15:48.173057  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.182022  337106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:15:48.182123  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.190766  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.199691  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.208613  337106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:15:48.216583  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.225357  337106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.238325  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:48.247144  337106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:15:48.254972  337106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:15:48.262335  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:15:48.380620  337106 ssh_runner.go:195] Run: sudo systemctl restart crio
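Taken together, the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf before this restart: they pin the pause image, force the cgroupfs cgroup manager, re-add conmon_cgroup = "pod", and inject the unprivileged-port sysctl into default_sysctls. A way to verify the net effect (a sketch reconstructed from the substitutions shown, not copied from the machine) would be:

    # on the node, e.g. via `out/minikube-linux-arm64 -p ha-422549 ssh`
    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected to show, among others:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",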
	I1227 20:15:48.551875  337106 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:15:48.551947  337106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:15:48.555685  337106 start.go:574] Will wait 60s for crictl version
	I1227 20:15:48.555757  337106 ssh_runner.go:195] Run: which crictl
	I1227 20:15:48.559221  337106 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:15:48.585662  337106 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:15:48.585789  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:15:48.613651  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:15:48.644252  337106 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:15:48.647214  337106 cli_runner.go:164] Run: docker network inspect ha-422549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:15:48.663170  337106 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 20:15:48.666927  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:15:48.676701  337106 kubeadm.go:884] updating cluster {Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:15:48.676861  337106 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:15:48.676926  337106 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:15:48.713302  337106 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:15:48.713323  337106 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:15:48.713375  337106 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:15:48.738578  337106 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:15:48.738606  337106 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:15:48.738615  337106 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I1227 20:15:48.738716  337106 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422549 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
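The [Unit]/[Service] block above appears to correspond to the kubelet drop-in that is later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 359-byte scp further down in this log). To see the unit actually in effect on the node one could run, for example:

    # on the node, e.g. via `out/minikube-linux-arm64 -p ha-422549 ssh`
    systemctl cat kubelet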
	I1227 20:15:48.738798  337106 ssh_runner.go:195] Run: crio config
	I1227 20:15:48.806339  337106 cni.go:84] Creating CNI manager for ""
	I1227 20:15:48.806361  337106 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1227 20:15:48.806383  337106 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:15:48.806406  337106 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-422549 NodeName:ha-422549 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:15:48.806540  337106 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-422549"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
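The kubeadm config above (Init/Cluster/Kubelet/KubeProxy documents) is what later lands in /var/tmp/minikube/kubeadm.yaml.new via the 2226-byte scp below. A hands-on way to sanity-check a config like this, assuming kubeadm sits next to kubelet under /var/lib/minikube/binaries/v1.35.0 as the kubelet ExecStart earlier in this log suggests, is a dry run against that file:

    # on the node, e.g. via `out/minikube-linux-arm64 -p ha-422549 ssh`
    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run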
	
	I1227 20:15:48.806566  337106 kube-vip.go:115] generating kube-vip config ...
	I1227 20:15:48.806619  337106 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 20:15:48.818243  337106 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:15:48.818375  337106 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
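Because the `lsmod | grep ip_vs` probe above came back empty, kube-vip gives up IPVS load-balancing and relies on ARP plus leader election (vip_arp, vip_leaderelection) to float the control-plane VIP 192.168.49.254 on eth0. Once the cluster is up, the static pod and the VIP can be checked with something like:

    kubectl -n kube-system get pods -o wide | grep kube-vip
    # on the current leader node the VIP should be attached to eth0:
    out/minikube-linux-arm64 -p ha-422549 ssh -- ip addr show eth0 | grep 192.168.49.254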
	I1227 20:15:48.818447  337106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:15:48.825705  337106 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:15:48.825785  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1227 20:15:48.832852  337106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1227 20:15:48.844713  337106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:15:48.856701  337106 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1227 20:15:48.868844  337106 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 20:15:48.880915  337106 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 20:15:48.884598  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:15:48.893875  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:15:49.019776  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:15:49.036215  337106 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549 for IP: 192.168.49.2
	I1227 20:15:49.036242  337106 certs.go:195] generating shared ca certs ...
	I1227 20:15:49.036258  337106 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:15:49.036390  337106 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:15:49.036447  337106 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:15:49.036460  337106 certs.go:257] generating profile certs ...
	I1227 20:15:49.036541  337106 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key
	I1227 20:15:49.036611  337106 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.743f7ef3
	I1227 20:15:49.036653  337106 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key
	I1227 20:15:49.036666  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:15:49.036679  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:15:49.036694  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:15:49.036704  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:15:49.036720  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:15:49.036731  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:15:49.036746  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:15:49.036756  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:15:49.036804  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:15:49.036836  337106 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:15:49.036848  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:15:49.036874  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:15:49.036910  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:15:49.036939  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:15:49.037002  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:15:49.037036  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:15:49.037057  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:49.037072  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:15:49.037704  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:15:49.057400  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:15:49.076605  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:15:49.095621  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:15:49.115441  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:15:49.135019  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:15:49.162312  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:15:49.179956  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:15:49.203774  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:15:49.228107  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:15:49.246930  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:15:49.265916  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:15:49.281838  337106 ssh_runner.go:195] Run: openssl version
	I1227 20:15:49.287989  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:49.295912  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:15:49.303435  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:49.307018  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:49.307115  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:49.347922  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:15:49.354929  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:15:49.361715  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:15:49.368688  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:15:49.372719  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:15:49.372798  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:15:49.413917  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:15:49.421060  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:15:49.428016  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:15:49.435273  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:15:49.438964  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:15:49.439075  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:15:49.480693  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:15:49.488361  337106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:15:49.492062  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:15:49.532621  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:15:49.573227  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:15:49.615004  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:15:49.660835  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:15:49.706320  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
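	The openssl steps above do two things: each CA copied to /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its subject-hash name (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run), and every control-plane certificate is checked with -checkend 86400, i.e. it must stay valid for at least another 24 hours. A minimal shell sketch of the same checks, using paths taken from the log (the loop itself is illustrative, not minikube code):

	  # Link a CA into the hashed trust store, as done for minikubeCA.pem above
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"

	  # Fail loudly if a serving certificate expires within the next 86400s (24h)
	  for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	             /var/lib/minikube/certs/etcd/server.crt; do
	    openssl x509 -noout -in "$crt" -checkend 86400 || echo "$crt expires within 24h"
	  done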
	I1227 20:15:49.793965  337106 kubeadm.go:401] StartCluster: {Name:ha-422549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:15:49.794119  337106 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:15:49.794193  337106 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:15:49.873661  337106 cri.go:96] found id: "acdd287d4087fec2c7c00eb589c13b06231128c1441e2db4a8f74c57600a6e67"
	I1227 20:15:49.873685  337106 cri.go:96] found id: "7c4ac1dbe59ad7d3143dfe74886a6bc3058bfad37ae864b855a6e47c1a4d984e"
	I1227 20:15:49.873690  337106 cri.go:96] found id: "6b0b91d1da0a4c385d0d3110ebc1d18efbc54bab7d6da6bba31c072f2fbd4da9"
	I1227 20:15:49.873694  337106 cri.go:96] found id: "776b31832bd3b44eb905f188f6aa9c0428a287ba7eaeb4ed172dd8bef1b5795b"
	I1227 20:15:49.873697  337106 cri.go:96] found id: "97ce57129ce3bc803fd62d49e1f3f06d06aa64d93e2ef36f372084cbbd21e34a"
	I1227 20:15:49.873717  337106 cri.go:96] found id: ""
	I1227 20:15:49.873771  337106 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:15:49.891661  337106 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:15:49Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:15:49.891749  337106 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:15:49.906600  337106 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:15:49.906624  337106 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:15:49.906703  337106 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:15:49.919028  337106 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:15:49.919479  337106 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-422549" does not appear in /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:15:49.919620  337106 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-272475/kubeconfig needs updating (will repair): [kubeconfig missing "ha-422549" cluster setting kubeconfig missing "ha-422549" context setting]
	I1227 20:15:49.919957  337106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:15:49.920555  337106 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 20:15:49.921302  337106 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1227 20:15:49.921327  337106 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1227 20:15:49.921333  337106 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1227 20:15:49.921364  337106 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1227 20:15:49.921405  337106 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1227 20:15:49.921411  337106 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1227 20:15:49.921423  337106 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1227 20:15:49.921745  337106 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:15:49.936013  337106 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1227 20:15:49.936040  337106 kubeadm.go:602] duration metric: took 29.409884ms to restartPrimaryControlPlane
	I1227 20:15:49.936051  337106 kubeadm.go:403] duration metric: took 142.110676ms to StartCluster
	I1227 20:15:49.936075  337106 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:15:49.936142  337106 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:15:49.937228  337106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:15:49.937930  337106 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:15:49.938100  337106 start.go:242] waiting for startup goroutines ...
	I1227 20:15:49.938130  337106 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:15:49.939423  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:49.942218  337106 out.go:179] * Enabled addons: 
	I1227 20:15:49.945329  337106 addons.go:530] duration metric: took 7.202537ms for enable addons: enabled=[]
	I1227 20:15:49.945417  337106 start.go:247] waiting for cluster config update ...
	I1227 20:15:49.945442  337106 start.go:256] writing updated cluster config ...
	I1227 20:15:49.948818  337106 out.go:203] 
	I1227 20:15:49.952226  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:49.952424  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:15:49.955848  337106 out.go:179] * Starting "ha-422549-m02" control-plane node in "ha-422549" cluster
	I1227 20:15:49.958975  337106 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:15:49.962204  337106 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:15:49.965179  337106 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:15:49.965273  337106 cache.go:65] Caching tarball of preloaded images
	I1227 20:15:49.965249  337106 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:15:49.965709  337106 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:15:49.965749  337106 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:15:49.965939  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:15:49.990566  337106 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:15:49.990585  337106 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:15:49.990599  337106 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:15:49.990629  337106 start.go:360] acquireMachinesLock for ha-422549-m02: {Name:mk8fc7aa5d6c41749cc4b9db094e3fd243d8b868 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:15:49.990677  337106 start.go:364] duration metric: took 33.255µs to acquireMachinesLock for "ha-422549-m02"
	I1227 20:15:49.990697  337106 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:15:49.990704  337106 fix.go:54] fixHost starting: m02
	I1227 20:15:49.990960  337106 cli_runner.go:164] Run: docker container inspect ha-422549-m02 --format={{.State.Status}}
	I1227 20:15:50.012661  337106 fix.go:112] recreateIfNeeded on ha-422549-m02: state=Stopped err=<nil>
	W1227 20:15:50.012689  337106 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:15:50.016334  337106 out.go:252] * Restarting existing docker container for "ha-422549-m02" ...
	I1227 20:15:50.016437  337106 cli_runner.go:164] Run: docker start ha-422549-m02
	I1227 20:15:50.398628  337106 cli_runner.go:164] Run: docker container inspect ha-422549-m02 --format={{.State.Status}}
	I1227 20:15:50.427580  337106 kic.go:430] container "ha-422549-m02" state is running.
	I1227 20:15:50.427943  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m02
	I1227 20:15:50.459424  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:15:50.459657  337106 machine.go:94] provisionDockerMachine start ...
	I1227 20:15:50.459714  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:50.490531  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:50.493631  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1227 20:15:50.493650  337106 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:15:50.494339  337106 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 20:15:53.641274  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m02
	
	I1227 20:15:53.641349  337106 ubuntu.go:182] provisioning hostname "ha-422549-m02"
	I1227 20:15:53.641467  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:53.663080  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:53.663387  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1227 20:15:53.663406  337106 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-422549-m02 && echo "ha-422549-m02" | sudo tee /etc/hostname
	I1227 20:15:53.819054  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m02
	
	I1227 20:15:53.819139  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:53.847197  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:53.847500  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1227 20:15:53.847516  337106 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422549-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422549-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422549-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:15:53.989824  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:15:53.989849  337106 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:15:53.989866  337106 ubuntu.go:190] setting up certificates
	I1227 20:15:53.989878  337106 provision.go:84] configureAuth start
	I1227 20:15:53.989941  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m02
	I1227 20:15:54.009870  337106 provision.go:143] copyHostCerts
	I1227 20:15:54.009915  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:15:54.009950  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:15:54.009964  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:15:54.010041  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:15:54.010125  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:15:54.010148  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:15:54.010153  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:15:54.010182  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:15:54.010267  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:15:54.010289  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:15:54.010297  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:15:54.010323  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:15:54.010374  337106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.ha-422549-m02 san=[127.0.0.1 192.168.49.3 ha-422549-m02 localhost minikube]
	I1227 20:15:54.260286  337106 provision.go:177] copyRemoteCerts
	I1227 20:15:54.260405  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:15:54.260467  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:54.278663  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:15:54.377066  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:15:54.377172  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:15:54.395067  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:15:54.395180  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 20:15:54.412398  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:15:54.412507  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 20:15:54.429091  337106 provision.go:87] duration metric: took 439.199295ms to configureAuth
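	configureAuth regenerates the docker-machine style server certificate for ha-422549-m02 with SANs for 127.0.0.1, 192.168.49.3, ha-422549-m02, localhost and minikube, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the node. If the SAN list ever needs verifying, a hypothetical check on the node would look like this (path from the log, command illustrative):

	  # Inspect the SANs of the provisioned server certificate
	  sudo openssl x509 -in /etc/docker/server.pem -noout -text \
	    | grep -A1 'Subject Alternative Name'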
	I1227 20:15:54.429119  337106 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:15:54.429346  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:54.429480  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:54.446402  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:15:54.446712  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1227 20:15:54.446736  337106 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:15:54.817328  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:15:54.817351  337106 machine.go:97] duration metric: took 4.357685623s to provisionDockerMachine
	I1227 20:15:54.817363  337106 start.go:293] postStartSetup for "ha-422549-m02" (driver="docker")
	I1227 20:15:54.817373  337106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:15:54.817438  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:15:54.817558  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:54.834291  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:15:54.933155  337106 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:15:54.936441  337106 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:15:54.936469  337106 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:15:54.936480  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:15:54.936536  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:15:54.936618  337106 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:15:54.936632  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:15:54.936739  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:15:54.944112  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:15:54.961353  337106 start.go:296] duration metric: took 143.973459ms for postStartSetup
	I1227 20:15:54.961439  337106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:15:54.961529  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:54.978679  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:15:55.075001  337106 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:15:55.080166  337106 fix.go:56] duration metric: took 5.089454661s for fixHost
	I1227 20:15:55.080193  337106 start.go:83] releasing machines lock for "ha-422549-m02", held for 5.089507139s
	I1227 20:15:55.080267  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m02
	I1227 20:15:55.100982  337106 out.go:179] * Found network options:
	I1227 20:15:55.103953  337106 out.go:179]   - NO_PROXY=192.168.49.2
	W1227 20:15:55.106802  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:15:55.106845  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	I1227 20:15:55.106919  337106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:15:55.106964  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:55.107011  337106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:15:55.107066  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m02
	I1227 20:15:55.130151  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:15:55.137687  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m02/id_rsa Username:docker}
	I1227 20:15:55.324223  337106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:15:55.328436  337106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:15:55.328502  337106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:15:55.336088  337106 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:15:55.336120  337106 start.go:496] detecting cgroup driver to use...
	I1227 20:15:55.336165  337106 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:15:55.336216  337106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:15:55.350639  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:15:55.363702  337106 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:15:55.363812  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:15:55.380023  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:15:55.396017  337106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:15:55.627299  337106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:15:55.867067  337106 docker.go:234] disabling docker service ...
	I1227 20:15:55.867179  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:15:55.887006  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:15:55.903434  337106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:15:56.147368  337106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:15:56.372701  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:15:56.386071  337106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:15:56.438830  337106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:15:56.438945  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.453154  337106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:15:56.453272  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.469839  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.480255  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.492229  337106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:15:56.504717  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.522023  337106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.536543  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:15:56.549900  337106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:15:56.562631  337106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:15:56.570307  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:15:56.790142  337106 ssh_runner.go:195] Run: sudo systemctl restart crio
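	The sed/grep commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before the restart: pause_image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is set to cgroupfs with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is inserted into default_sysctls. A quick way to confirm the drop-in after the restart (illustrative command, not part of the test):

	  # Show the settings minikube just rewrote in the CRI-O drop-in
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf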
	I1227 20:15:57.038862  337106 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:15:57.038970  337106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:15:57.042575  337106 start.go:574] Will wait 60s for crictl version
	I1227 20:15:57.042675  337106 ssh_runner.go:195] Run: which crictl
	I1227 20:15:57.046123  337106 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:15:57.079472  337106 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:15:57.079604  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:15:57.111539  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:15:57.144245  337106 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:15:57.147176  337106 out.go:179]   - env NO_PROXY=192.168.49.2
	I1227 20:15:57.150339  337106 cli_runner.go:164] Run: docker network inspect ha-422549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:15:57.166874  337106 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 20:15:57.170704  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:15:57.180393  337106 mustload.go:66] Loading cluster: ha-422549
	I1227 20:15:57.180638  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:57.180911  337106 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:15:57.198058  337106 host.go:66] Checking if "ha-422549" exists ...
	I1227 20:15:57.198339  337106 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549 for IP: 192.168.49.3
	I1227 20:15:57.198353  337106 certs.go:195] generating shared ca certs ...
	I1227 20:15:57.198367  337106 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:15:57.198490  337106 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:15:57.198538  337106 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:15:57.198549  337106 certs.go:257] generating profile certs ...
	I1227 20:15:57.198625  337106 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key
	I1227 20:15:57.198688  337106 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.982843aa
	I1227 20:15:57.198735  337106 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key
	I1227 20:15:57.198748  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:15:57.198762  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:15:57.198779  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:15:57.198791  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:15:57.198810  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:15:57.198822  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:15:57.198837  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:15:57.198847  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:15:57.198901  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:15:57.198935  337106 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:15:57.198948  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:15:57.198974  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:15:57.199001  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:15:57.199031  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:15:57.199079  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:15:57.199116  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:15:57.199131  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:15:57.199146  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:57.199227  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:15:57.217178  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:15:57.309803  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1227 20:15:57.313760  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1227 20:15:57.321367  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1227 20:15:57.324564  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1227 20:15:57.332196  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1227 20:15:57.335588  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1227 20:15:57.343125  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1227 20:15:57.346654  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1227 20:15:57.354254  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1227 20:15:57.357588  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1227 20:15:57.365565  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1227 20:15:57.369083  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1227 20:15:57.377616  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:15:57.394501  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:15:57.411297  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:15:57.428988  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:15:57.454933  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:15:57.477949  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:15:57.503718  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:15:57.527644  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:15:57.546021  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:15:57.562799  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:15:57.579794  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:15:57.596739  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1227 20:15:57.608968  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1227 20:15:57.621234  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1227 20:15:57.633283  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1227 20:15:57.645247  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1227 20:15:57.656994  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1227 20:15:57.668811  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (728 bytes)
	I1227 20:15:57.680824  337106 ssh_runner.go:195] Run: openssl version
	I1227 20:15:57.687264  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:15:57.694487  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:15:57.701580  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:15:57.705288  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:15:57.705345  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:15:57.746792  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:15:57.754009  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:15:57.760822  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:15:57.767703  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:15:57.771201  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:15:57.771305  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:15:57.813599  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:15:57.821036  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:57.828245  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:15:57.835688  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:57.839528  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:57.839640  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:15:57.880298  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:15:57.887708  337106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:15:57.891264  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:15:57.931649  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:15:57.972880  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:15:58.015739  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:15:58.057920  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:15:58.099308  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 20:15:58.140147  337106 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.35.0 crio true true} ...
	I1227 20:15:58.140265  337106 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422549-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
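	The drop-in above replaces the kubelet ExecStart with node-specific flags, most importantly --hostname-override=ha-422549-m02 and --node-ip=192.168.49.3, so the restarted secondary control plane re-registers under its own name and address; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. To see the effective unit on the node (illustrative command):

	  # Print kubelet.service together with the minikube drop-in
	  sudo systemctl cat kubelet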
	I1227 20:15:58.140313  337106 kube-vip.go:115] generating kube-vip config ...
	I1227 20:15:58.140373  337106 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 20:15:58.151945  337106 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:15:58.152003  337106 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
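	Because "lsmod | grep ip_vs" found no IPVS modules, the generated kube-vip static pod skips control-plane load-balancing and only runs leader election: the holder of the plndr-cp-lock lease in kube-system advertises the HA VIP 192.168.49.254 on eth0 via ARP. Two illustrative checks for where the VIP currently lives (not part of the test run):

	  # On a control-plane node: is the VIP bound to eth0 here?
	  ip -4 addr show dev eth0 | grep 192.168.49.254

	  # From the host: which node holds the kube-vip lease?
	  kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'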
	I1227 20:15:58.152075  337106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:15:58.159193  337106 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:15:58.159305  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1227 20:15:58.166464  337106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 20:15:58.178769  337106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:15:58.190381  337106 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 20:15:58.202642  337106 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 20:15:58.206198  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
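	Both minikube-internal names are pinned by editing /etc/hosts on the node: host.minikube.internal points at the docker network gateway 192.168.49.1 (set earlier in this log) and control-plane.minikube.internal at the kube-vip address 192.168.49.254, so kubeconfig and kubeadm can use stable names instead of raw IPs. A trivial check on the node (illustrative):

	  # Both entries should be present after the rewrites above
	  grep 'minikube.internal' /etc/hosts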
	I1227 20:15:58.215567  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:15:58.331455  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:15:58.345573  337106 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:15:58.345907  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:15:58.350455  337106 out.go:179] * Verifying Kubernetes components...
	I1227 20:15:58.353287  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:15:58.476026  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:15:58.491956  337106 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1227 20:15:58.492036  337106 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1227 20:15:58.492360  337106 node_ready.go:35] waiting up to 6m0s for node "ha-422549-m02" to be "Ready" ...
	W1227 20:16:08.493659  337106 node_ready.go:55] error getting node "ha-422549-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-422549-m02": net/http: TLS handshake timeout
	W1227 20:16:13.724508  337106 node_ready.go:57] node "ha-422549-m02" has "Ready":"Unknown" status (will retry)
	I1227 20:16:13.998074  337106 node_ready.go:49] node "ha-422549-m02" is "Ready"
	I1227 20:16:13.998104  337106 node_ready.go:38] duration metric: took 15.505718327s for node "ha-422549-m02" to be "Ready" ...
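The 15.5s wait recorded above is a poll of the node's Ready condition through the API server, retrying across transient errors such as the TLS handshake timeout logged at 20:16:08. A rough client-go equivalent of that loop, as a sketch rather than the runner's actual code (the kubeconfig path, poll interval, and timeout below are illustrative; the node name is from the log):

	// nodeready.go: poll a node until its Ready condition reports True.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Illustrative kubeconfig path; the test harness builds its client
		// from the profile's certificate files instead.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-422549-m02", metav1.GetOptions{})
			if err == nil {
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			// Retry on API errors (e.g. TLS handshake timeout) and on NotReady/Unknown.
			time.Sleep(5 * time.Second)
		}
		panic("timed out waiting for node Ready")
	}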
	I1227 20:16:13.998117  337106 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:16:13.998195  337106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:16:14.018969  337106 api_server.go:72] duration metric: took 15.673348785s to wait for apiserver process to appear ...
	I1227 20:16:14.019000  337106 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:16:14.019022  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:14.028770  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:16:14.028803  337106 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:16:14.519178  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:14.550966  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:16:14.551052  337106 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:16:15.019197  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:15.046385  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:16:15.046479  337106 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:16:15.519851  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:15.557956  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:16:15.558047  337106 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:16:16.019247  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:16.033187  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:16:16.033267  337106 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:16:16.519670  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:16.536800  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1227 20:16:16.539603  337106 api_server.go:141] control plane version: v1.35.0
	I1227 20:16:16.539669  337106 api_server.go:131] duration metric: took 2.52066052s to wait for apiserver health ...
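The /healthz probe above retries roughly every 500ms while the poststarthook/rbac/bootstrap-roles hook is still pending and stops as soon as the endpoint returns 200. A bare-bones version of that probe, using the client certificate and CA paths that appear in the client config earlier in the log (the attempt count and interval are illustrative):

	// healthz.go: poll the apiserver /healthz endpoint with the profile's
	// client certificate until it returns 200.
	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	func main() {
		base := "/home/jenkins/minikube-integration/22332-272475/.minikube"
		cert, err := tls.LoadX509KeyPair(base+"/profiles/ha-422549/client.crt", base+"/profiles/ha-422549/client.key")
		if err != nil {
			panic(err)
		}
		caPEM, err := os.ReadFile(base + "/ca.crt")
		if err != nil {
			panic(err)
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)

		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
			},
		}

		for attempt := 0; attempt < 120; attempt++ {
			resp, err := client.Get("https://192.168.49.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz:", string(body)) // prints "ok"
					return
				}
				// 500 while poststarthooks (e.g. rbac/bootstrap-roles) are still running.
			}
			time.Sleep(500 * time.Millisecond)
		}
		panic("apiserver never became healthy")
	}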
	I1227 20:16:16.539693  337106 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:16:16.570231  337106 system_pods.go:59] 26 kube-system pods found
	I1227 20:16:16.570324  337106 system_pods.go:61] "coredns-7d764666f9-mf5xw" [5a7f58c2-f991-46f0-9ece-9a561d53d25f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:16.570350  337106 system_pods.go:61] "coredns-7d764666f9-n5d9d" [159febfd-c1e4-4897-a372-59e4a3069914] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:16.570386  337106 system_pods.go:61] "etcd-ha-422549" [8f26f563-e734-4add-aefe-484f0e873a1e] Running
	I1227 20:16:16.570414  337106 system_pods.go:61] "etcd-ha-422549-m02" [5fed7e48-07c4-4a07-b63b-0fccbd196f6f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:16:16.570435  337106 system_pods.go:61] "etcd-ha-422549-m03" [d22f78a1-2f4c-41e6-b65a-bf7108686c71] Running
	I1227 20:16:16.570460  337106 system_pods.go:61] "kindnet-28svl" [1494f795-941f-418e-8090-098225eb9c6a] Running
	I1227 20:16:16.570493  337106 system_pods.go:61] "kindnet-4hl7v" [ea2cc8a1-df16-440c-a093-a5d915b249b4] Running
	I1227 20:16:16.570521  337106 system_pods.go:61] "kindnet-5wczs" [df3d7298-4140-464f-a6e8-c614e1683488] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:16:16.570663  337106 system_pods.go:61] "kindnet-qkqmv" [66d834ae-af1b-456d-ae48-8a0d6608f961] Running
	I1227 20:16:16.570696  337106 system_pods.go:61] "kube-apiserver-ha-422549" [14f8e794-2ba7-477d-806b-03dd5a33d868] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:16:16.570721  337106 system_pods.go:61] "kube-apiserver-ha-422549-m02" [a4b97cc6-26ef-4d46-9ef9-bdee08eb89d6] Running
	I1227 20:16:16.570746  337106 system_pods.go:61] "kube-apiserver-ha-422549-m03" [71f23288-3e33-4bc8-9182-08c190ae026f] Running
	I1227 20:16:16.570787  337106 system_pods.go:61] "kube-controller-manager-ha-422549" [b69af60f-4eac-4e85-aa81-66b7616a46f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:16:16.570820  337106 system_pods.go:61] "kube-controller-manager-ha-422549-m02" [07c0e68f-76e5-4cee-92a2-05dd2fb4c3e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:16:16.570843  337106 system_pods.go:61] "kube-controller-manager-ha-422549-m03" [af291694-2986-455c-8588-c2879d10ff3b] Running
	I1227 20:16:16.570865  337106 system_pods.go:61] "kube-proxy-cg4z5" [42f74e61-eb67-4d02-8f08-f77f7163f5fc] Running
	I1227 20:16:16.570897  337106 system_pods.go:61] "kube-proxy-kscg6" [baa716d5-546a-4922-ba51-fe1116e36c75] Running
	I1227 20:16:16.570923  337106 system_pods.go:61] "kube-proxy-mhmmn" [d69029af-1fc4-4a31-913e-92e1231e845a] Running
	I1227 20:16:16.570948  337106 system_pods.go:61] "kube-proxy-nqr7h" [d0fc3ef5-765a-4376-94e6-42237908d3fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 20:16:16.570969  337106 system_pods.go:61] "kube-scheduler-ha-422549" [549e105d-d2e7-42b6-ae48-098d590e7b1d] Running
	I1227 20:16:16.571002  337106 system_pods.go:61] "kube-scheduler-ha-422549-m02" [db9187da-87a8-4b73-baea-76f3d9ef35c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:16:16.571026  337106 system_pods.go:61] "kube-scheduler-ha-422549-m03" [2a6b70b3-5303-404f-8b1d-1a65b9b81555] Running
	I1227 20:16:16.571044  337106 system_pods.go:61] "kube-vip-ha-422549" [32d647ce-90ed-4f56-b4c8-7ed445019d88] Running
	I1227 20:16:16.571067  337106 system_pods.go:61] "kube-vip-ha-422549-m02" [ddde9374-24b7-498d-b829-6902c612b272] Running
	I1227 20:16:16.571109  337106 system_pods.go:61] "kube-vip-ha-422549-m03" [39a60c56-1bf0-4232-9af0-f55e0c66a33d] Running
	I1227 20:16:16.571136  337106 system_pods.go:61] "storage-provisioner" [0d645eab-223f-4dd6-9518-6ab4a21d4c09] Running
	I1227 20:16:16.571156  337106 system_pods.go:74] duration metric: took 31.434553ms to wait for pod list to return data ...
	I1227 20:16:16.571179  337106 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:16:16.590199  337106 default_sa.go:45] found service account: "default"
	I1227 20:16:16.590265  337106 default_sa.go:55] duration metric: took 19.064027ms for default service account to be created ...
	I1227 20:16:16.590290  337106 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:16:16.623079  337106 system_pods.go:86] 26 kube-system pods found
	I1227 20:16:16.623169  337106 system_pods.go:89] "coredns-7d764666f9-mf5xw" [5a7f58c2-f991-46f0-9ece-9a561d53d25f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:16.623195  337106 system_pods.go:89] "coredns-7d764666f9-n5d9d" [159febfd-c1e4-4897-a372-59e4a3069914] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:16.623234  337106 system_pods.go:89] "etcd-ha-422549" [8f26f563-e734-4add-aefe-484f0e873a1e] Running
	I1227 20:16:16.623263  337106 system_pods.go:89] "etcd-ha-422549-m02" [5fed7e48-07c4-4a07-b63b-0fccbd196f6f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:16:16.623283  337106 system_pods.go:89] "etcd-ha-422549-m03" [d22f78a1-2f4c-41e6-b65a-bf7108686c71] Running
	I1227 20:16:16.623303  337106 system_pods.go:89] "kindnet-28svl" [1494f795-941f-418e-8090-098225eb9c6a] Running
	I1227 20:16:16.623335  337106 system_pods.go:89] "kindnet-4hl7v" [ea2cc8a1-df16-440c-a093-a5d915b249b4] Running
	I1227 20:16:16.623362  337106 system_pods.go:89] "kindnet-5wczs" [df3d7298-4140-464f-a6e8-c614e1683488] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:16:16.623385  337106 system_pods.go:89] "kindnet-qkqmv" [66d834ae-af1b-456d-ae48-8a0d6608f961] Running
	I1227 20:16:16.623411  337106 system_pods.go:89] "kube-apiserver-ha-422549" [14f8e794-2ba7-477d-806b-03dd5a33d868] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:16:16.623447  337106 system_pods.go:89] "kube-apiserver-ha-422549-m02" [a4b97cc6-26ef-4d46-9ef9-bdee08eb89d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:16:16.623475  337106 system_pods.go:89] "kube-apiserver-ha-422549-m03" [71f23288-3e33-4bc8-9182-08c190ae026f] Running
	I1227 20:16:16.623501  337106 system_pods.go:89] "kube-controller-manager-ha-422549" [b69af60f-4eac-4e85-aa81-66b7616a46f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:16:16.623525  337106 system_pods.go:89] "kube-controller-manager-ha-422549-m02" [07c0e68f-76e5-4cee-92a2-05dd2fb4c3e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:16:16.623557  337106 system_pods.go:89] "kube-controller-manager-ha-422549-m03" [af291694-2986-455c-8588-c2879d10ff3b] Running
	I1227 20:16:16.623583  337106 system_pods.go:89] "kube-proxy-cg4z5" [42f74e61-eb67-4d02-8f08-f77f7163f5fc] Running
	I1227 20:16:16.623607  337106 system_pods.go:89] "kube-proxy-kscg6" [baa716d5-546a-4922-ba51-fe1116e36c75] Running
	I1227 20:16:16.623632  337106 system_pods.go:89] "kube-proxy-mhmmn" [d69029af-1fc4-4a31-913e-92e1231e845a] Running
	I1227 20:16:16.623664  337106 system_pods.go:89] "kube-proxy-nqr7h" [d0fc3ef5-765a-4376-94e6-42237908d3fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 20:16:16.623690  337106 system_pods.go:89] "kube-scheduler-ha-422549" [549e105d-d2e7-42b6-ae48-098d590e7b1d] Running
	I1227 20:16:16.623713  337106 system_pods.go:89] "kube-scheduler-ha-422549-m02" [db9187da-87a8-4b73-baea-76f3d9ef35c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:16:16.623737  337106 system_pods.go:89] "kube-scheduler-ha-422549-m03" [2a6b70b3-5303-404f-8b1d-1a65b9b81555] Running
	I1227 20:16:16.623769  337106 system_pods.go:89] "kube-vip-ha-422549" [32d647ce-90ed-4f56-b4c8-7ed445019d88] Running
	I1227 20:16:16.623794  337106 system_pods.go:89] "kube-vip-ha-422549-m02" [ddde9374-24b7-498d-b829-6902c612b272] Running
	I1227 20:16:16.623818  337106 system_pods.go:89] "kube-vip-ha-422549-m03" [39a60c56-1bf0-4232-9af0-f55e0c66a33d] Running
	I1227 20:16:16.623842  337106 system_pods.go:89] "storage-provisioner" [0d645eab-223f-4dd6-9518-6ab4a21d4c09] Running
	I1227 20:16:16.623877  337106 system_pods.go:126] duration metric: took 33.567641ms to wait for k8s-apps to be running ...
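The k8s-apps check above lists every kube-system pod and accepts a pod whose phase is Running even if individual containers still report ContainersNotReady. A sketch of that list-and-filter with client-go, under the same hedges as the node check (the kubeconfig path is illustrative):

	// syspods.go: list kube-system pods and flag any that are not Running or Succeeded.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning && p.Status.Phase != corev1.PodSucceeded {
				fmt.Printf("pod %q is %s\n", p.Name, p.Status.Phase)
			}
		}
	}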
	I1227 20:16:16.623905  337106 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:16:16.623994  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:16:16.670311  337106 system_svc.go:56] duration metric: took 46.39668ms WaitForService to wait for kubelet
	I1227 20:16:16.670384  337106 kubeadm.go:587] duration metric: took 18.324769156s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:16:16.670417  337106 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:16:16.708894  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:16.708992  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:16.709018  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:16.709039  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:16.709068  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:16.709094  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:16.709113  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:16.709132  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:16.709151  337106 node_conditions.go:105] duration metric: took 38.715442ms to run NodePressure ...
	I1227 20:16:16.709184  337106 start.go:242] waiting for startup goroutines ...
	I1227 20:16:16.709228  337106 start.go:256] writing updated cluster config ...
	I1227 20:16:16.713916  337106 out.go:203] 
	I1227 20:16:16.723292  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:16.723425  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:16:16.727142  337106 out.go:179] * Starting "ha-422549-m03" control-plane node in "ha-422549" cluster
	I1227 20:16:16.732478  337106 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:16:16.735844  337106 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:16:16.739409  337106 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:16:16.739458  337106 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:16:16.739659  337106 cache.go:65] Caching tarball of preloaded images
	I1227 20:16:16.739753  337106 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:16:16.739768  337106 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:16:16.739908  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:16:16.767918  337106 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:16:16.767942  337106 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:16:16.767957  337106 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:16:16.767980  337106 start.go:360] acquireMachinesLock for ha-422549-m03: {Name:mkf062d56fcf026ae5cb73bd2d2d3016f0f6c481 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:16:16.768043  337106 start.go:364] duration metric: took 41.697µs to acquireMachinesLock for "ha-422549-m03"
	I1227 20:16:16.768068  337106 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:16:16.768074  337106 fix.go:54] fixHost starting: m03
	I1227 20:16:16.768352  337106 cli_runner.go:164] Run: docker container inspect ha-422549-m03 --format={{.State.Status}}
	I1227 20:16:16.790621  337106 fix.go:112] recreateIfNeeded on ha-422549-m03: state=Stopped err=<nil>
	W1227 20:16:16.790653  337106 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:16:16.794891  337106 out.go:252] * Restarting existing docker container for "ha-422549-m03" ...
	I1227 20:16:16.794974  337106 cli_runner.go:164] Run: docker start ha-422549-m03
	I1227 20:16:17.149956  337106 cli_runner.go:164] Run: docker container inspect ha-422549-m03 --format={{.State.Status}}
	I1227 20:16:17.174958  337106 kic.go:430] container "ha-422549-m03" state is running.
	I1227 20:16:17.175307  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m03
	I1227 20:16:17.213633  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:16:17.213863  337106 machine.go:94] provisionDockerMachine start ...
	I1227 20:16:17.213929  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:17.241742  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:17.242041  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1227 20:16:17.242056  337106 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:16:17.242635  337106 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 20:16:20.405227  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m03
	
	I1227 20:16:20.405265  337106 ubuntu.go:182] provisioning hostname "ha-422549-m03"
	I1227 20:16:20.405335  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:20.447382  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:20.447685  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1227 20:16:20.447702  337106 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-422549-m03 && echo "ha-422549-m03" | sudo tee /etc/hostname
	I1227 20:16:20.641581  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m03
	
	I1227 20:16:20.641669  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:20.671096  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:20.671417  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1227 20:16:20.671491  337106 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422549-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422549-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422549-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:16:20.825909  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
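Each provisioning step above (hostname, the 127.0.1.1 /etc/hosts fix, and the CRI-O configuration later) is a single command run over the container's published SSH port; the first attempt at 20:16:17 fails with a handshake EOF because sshd is not up yet, and the runner simply retries. A stripped-down run-with-retry using golang.org/x/crypto/ssh, as a sketch only (the port and key path match the log; the retry policy is illustrative):

	// sshrun.go: dial the forwarded SSH port, retrying on handshake errors,
	// then run one command on the machine.
	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m03/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyPEM)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM; host key is not pinned
			Timeout:         10 * time.Second,
		}

		var client *ssh.Client
		for attempt := 0; attempt < 30; attempt++ {
			client, err = ssh.Dial("tcp", "127.0.0.1:33193", cfg)
			if err == nil {
				break
			}
			time.Sleep(time.Second) // e.g. "handshake failed: EOF" while sshd starts
		}
		if client == nil {
			panic(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()

		out, err := session.CombinedOutput("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("hostname: %s", out)
	}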
	I1227 20:16:20.825934  337106 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:16:20.825963  337106 ubuntu.go:190] setting up certificates
	I1227 20:16:20.825973  337106 provision.go:84] configureAuth start
	I1227 20:16:20.826043  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m03
	I1227 20:16:20.848683  337106 provision.go:143] copyHostCerts
	I1227 20:16:20.848722  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:16:20.848751  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:16:20.848757  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:16:20.848829  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:16:20.848936  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:16:20.848954  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:16:20.848959  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:16:20.848987  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:16:20.849035  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:16:20.849051  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:16:20.849055  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:16:20.849079  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:16:20.849139  337106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.ha-422549-m03 san=[127.0.0.1 192.168.49.4 ha-422549-m03 localhost minikube]
	I1227 20:16:20.958713  337106 provision.go:177] copyRemoteCerts
	I1227 20:16:20.958777  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:16:20.958919  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:20.978456  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m03/id_rsa Username:docker}
	I1227 20:16:21.097778  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:16:21.097855  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:16:21.118223  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:16:21.118280  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 20:16:21.171526  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:16:21.171643  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:16:21.238272  337106 provision.go:87] duration metric: took 412.285774ms to configureAuth
	I1227 20:16:21.238317  337106 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:16:21.238586  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:21.238711  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:21.261112  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:21.261428  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1227 20:16:21.261479  337106 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:16:22.736503  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:16:22.736545  337106 machine.go:97] duration metric: took 5.522665605s to provisionDockerMachine
	I1227 20:16:22.736559  337106 start.go:293] postStartSetup for "ha-422549-m03" (driver="docker")
	I1227 20:16:22.736569  337106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:16:22.736631  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:16:22.736681  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:22.757560  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m03/id_rsa Username:docker}
	I1227 20:16:22.872943  337106 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:16:22.877107  337106 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:16:22.877150  337106 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:16:22.877162  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:16:22.877224  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:16:22.877310  337106 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:16:22.877323  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:16:22.877568  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:16:22.887508  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:16:22.935543  337106 start.go:296] duration metric: took 198.968452ms for postStartSetup
	I1227 20:16:22.935675  337106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:16:22.935751  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:22.962394  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m03/id_rsa Username:docker}
	I1227 20:16:23.086315  337106 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:16:23.098060  337106 fix.go:56] duration metric: took 6.329978316s for fixHost
	I1227 20:16:23.098095  337106 start.go:83] releasing machines lock for "ha-422549-m03", held for 6.330038441s
	I1227 20:16:23.098169  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m03
	I1227 20:16:23.127385  337106 out.go:179] * Found network options:
	I1227 20:16:23.130521  337106 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1227 20:16:23.133556  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:23.133603  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:23.133636  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:23.133648  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	I1227 20:16:23.133723  337106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:16:23.133754  337106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:16:23.133766  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:23.133843  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:16:23.174788  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m03/id_rsa Username:docker}
	I1227 20:16:23.176337  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m03/id_rsa Username:docker}
	I1227 20:16:23.532310  337106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:16:23.539423  337106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:16:23.539508  337106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:16:23.547781  337106 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:16:23.547805  337106 start.go:496] detecting cgroup driver to use...
	I1227 20:16:23.547836  337106 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:16:23.547889  337106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:16:23.564242  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:16:23.579653  337106 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:16:23.579767  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:16:23.598176  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:16:23.613182  337106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:16:23.877595  337106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:16:24.169571  337106 docker.go:234] disabling docker service ...
	I1227 20:16:24.169685  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:16:24.197205  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:16:24.211488  337106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:16:24.466324  337106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:16:24.716660  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:16:24.734029  337106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:16:24.758554  337106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:16:24.758647  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.777034  337106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:16:24.777106  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.791147  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.805710  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.818822  337106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:16:24.828018  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.843848  337106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.852557  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:24.865822  337106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:16:24.881844  337106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:16:24.890467  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:16:25.116336  337106 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:16:26.436202  337106 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.319834137s)
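The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before restarting CRI-O: pin the pause image, set cgroup_manager to cgroupfs, reset conmon_cgroup, and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A small Go sketch of the same line-level substitutions the sed commands perform (the regexes mirror the ones in the log; the default_sysctls edits follow the same pattern and are omitted for brevity):

	// crioconf.go: apply the sed-style substitutions above to 02-crio.conf.
	// Run as root, then `systemctl daemon-reload && systemctl restart crio`.
	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		conf := string(data)

		// pause_image = "registry.k8s.io/pause:3.10.1"
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		// cgroup_manager = "cgroupfs"
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
		conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
			ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

		if err := os.WriteFile(path, []byte(conf), 0644); err != nil {
			panic(err)
		}
	}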
	I1227 20:16:26.436227  337106 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:16:26.436285  337106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:16:26.440409  337106 start.go:574] Will wait 60s for crictl version
	I1227 20:16:26.440474  337106 ssh_runner.go:195] Run: which crictl
	I1227 20:16:26.444800  337106 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:16:26.475048  337106 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:16:26.475137  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:16:26.509827  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:16:26.549254  337106 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:16:26.552189  337106 out.go:179]   - env NO_PROXY=192.168.49.2
	I1227 20:16:26.555166  337106 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1227 20:16:26.558176  337106 cli_runner.go:164] Run: docker network inspect ha-422549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:16:26.575734  337106 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 20:16:26.580184  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:16:26.590410  337106 mustload.go:66] Loading cluster: ha-422549
	I1227 20:16:26.590667  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:26.590918  337106 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:16:26.608326  337106 host.go:66] Checking if "ha-422549" exists ...
	I1227 20:16:26.608672  337106 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549 for IP: 192.168.49.4
	I1227 20:16:26.608684  337106 certs.go:195] generating shared ca certs ...
	I1227 20:16:26.608708  337106 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:16:26.608822  337106 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:16:26.608870  337106 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:16:26.608877  337106 certs.go:257] generating profile certs ...
	I1227 20:16:26.608966  337106 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key
	I1227 20:16:26.609032  337106 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key.d8cf7377
	I1227 20:16:26.609078  337106 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key
	I1227 20:16:26.609087  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:16:26.609099  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:16:26.609109  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:16:26.609121  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:16:26.609131  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:16:26.609142  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:16:26.609153  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:16:26.609163  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:16:26.609238  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:16:26.609270  337106 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:16:26.609278  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:16:26.609540  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:16:26.609594  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:16:26.609622  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:16:26.609673  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:16:26.609705  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:26.609718  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:16:26.609729  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:16:26.609784  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:16:26.627281  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:16:26.717750  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1227 20:16:26.722194  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1227 20:16:26.732379  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1227 20:16:26.736107  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1227 20:16:26.744795  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1227 20:16:26.748608  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1227 20:16:26.757298  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1227 20:16:26.760963  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1227 20:16:26.770282  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1227 20:16:26.774405  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1227 20:16:26.782912  337106 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1227 20:16:26.787280  337106 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1227 20:16:26.796054  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:16:26.815746  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:16:26.833735  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:16:26.852956  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:16:26.873558  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:16:26.893781  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:16:26.912114  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:16:26.930067  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:16:26.954144  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:16:26.992095  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:16:27.032398  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:16:27.058957  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1227 20:16:27.082646  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1227 20:16:27.099055  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1227 20:16:27.114942  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1227 20:16:27.128524  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1227 20:16:27.143949  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1227 20:16:27.166895  337106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (728 bytes)
	I1227 20:16:27.189731  337106 ssh_runner.go:195] Run: openssl version
	I1227 20:16:27.199330  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:27.207176  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:16:27.215001  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:27.218816  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:27.218944  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:27.262656  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:16:27.270122  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:16:27.278066  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:16:27.286224  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:16:27.290216  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:16:27.290299  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:16:27.331583  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:16:27.339149  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:16:27.347443  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:16:27.354941  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:16:27.358541  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:16:27.358644  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:16:27.401369  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:16:27.408555  337106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:16:27.412327  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:16:27.452918  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:16:27.493668  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:16:27.534423  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:16:27.575645  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:16:27.617601  337106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
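	The sequence above is how the node's trust store and control-plane certificates are validated: each CA placed under /usr/share/ca-certificates is hashed with openssl x509 -hash -noout and symlinked into /etc/ssl/certs as <hash>.0, and each serving certificate is checked with -checkend 86400 so a failure surfaces if it would expire within 24 hours. A minimal shell sketch of the same two checks, using paths taken from the log and run on the node itself:
	
	# install a CA into the OpenSSL trust directory by subject hash (illustrative)
	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")          # e.g. b5213941
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
	
	# exit non-zero if the cert expires within the next 86400 seconds (24 hours)
	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  || echo "certificate expires within 24h"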
	I1227 20:16:27.658239  337106 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.35.0 crio true true} ...
	I1227 20:16:27.658389  337106 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422549-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
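	The kubelet unit text above becomes a systemd drop-in (the 363-byte 10-kubeadm.conf copied a few lines below); the empty ExecStart= line clears the base unit's command so the node-specific flags (--hostname-override, --node-ip) can replace it. A sketch of the equivalent manual steps, assuming the drop-in contains only the sections shown in the log:
	
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	[Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422549-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	EOF
	sudo systemctl daemon-reload && sudo systemctl start kubelet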
	I1227 20:16:27.658424  337106 kube-vip.go:115] generating kube-vip config ...
	I1227 20:16:27.658480  337106 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 20:16:27.670482  337106 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:16:27.670542  337106 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
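	This manifest is the file copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, so kubelet runs kube-vip as a static pod on the control-plane node. Because the lsmod check for ip_vs failed, IPVS-based control-plane load-balancing is skipped and kube-vip only advertises the VIP 192.168.49.254 on eth0 via ARP, with leader election through the plndr-cp-lock lease. A few illustrative checks from a node shell, with addresses taken from the log:
	
	lsmod | grep ip_vs || echo "ip_vs not loaded; ARP mode only"
	ip addr show eth0 | grep 192.168.49.254       # VIP appears on the elected leader
	curl -sk https://192.168.49.254:8443/healthz  # API reachable through the VIP (expect: ok)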
	I1227 20:16:27.670611  337106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:16:27.678382  337106 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:16:27.678493  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1227 20:16:27.688057  337106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 20:16:27.702120  337106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:16:27.721182  337106 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 20:16:27.736629  337106 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 20:16:27.740129  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:16:27.750576  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:16:27.920085  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:16:27.936290  337106 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:16:27.936639  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:27.941595  337106 out.go:179] * Verifying Kubernetes components...
	I1227 20:16:27.944502  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:16:28.098929  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:16:28.115947  337106 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	W1227 20:16:28.116063  337106 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1227 20:16:28.116301  337106 node_ready.go:35] waiting up to 6m0s for node "ha-422549-m03" to be "Ready" ...
	W1227 20:16:30.121347  337106 node_ready.go:57] node "ha-422549-m03" has "Ready":"Unknown" status (will retry)
	W1227 20:16:32.620007  337106 node_ready.go:57] node "ha-422549-m03" has "Ready":"Unknown" status (will retry)
	W1227 20:16:34.620221  337106 node_ready.go:57] node "ha-422549-m03" has "Ready":"Unknown" status (will retry)
	W1227 20:16:36.620631  337106 node_ready.go:57] node "ha-422549-m03" has "Ready":"Unknown" status (will retry)
	W1227 20:16:38.620914  337106 node_ready.go:57] node "ha-422549-m03" has "Ready":"Unknown" status (will retry)
	W1227 20:16:41.119914  337106 node_ready.go:57] node "ha-422549-m03" has "Ready":"Unknown" status (will retry)
	I1227 20:16:42.138199  337106 node_ready.go:49] node "ha-422549-m03" is "Ready"
	I1227 20:16:42.138234  337106 node_ready.go:38] duration metric: took 14.021894093s for node "ha-422549-m03" to be "Ready" ...
	I1227 20:16:42.138250  337106 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:16:42.138320  337106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:16:42.201875  337106 api_server.go:72] duration metric: took 14.265538166s to wait for apiserver process to appear ...
	I1227 20:16:42.201905  337106 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:16:42.201928  337106 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 20:16:42.211305  337106 api_server.go:325] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1227 20:16:42.217811  337106 api_server.go:141] control plane version: v1.35.0
	I1227 20:16:42.217842  337106 api_server.go:131] duration metric: took 15.928834ms to wait for apiserver health ...
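	The readiness wait above has two stages: poll the Node object until its Ready condition turns True, then hit the apiserver /healthz endpoint directly. Assuming the profile's kubeconfig is the active context, the same checks by hand look like:
	
	kubectl get node ha-422549-m03 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # expect: True
	curl -sk https://192.168.49.2:8443/healthz                        # expect: ok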
	I1227 20:16:42.217852  337106 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:16:42.235518  337106 system_pods.go:59] 26 kube-system pods found
	I1227 20:16:42.235637  337106 system_pods.go:61] "coredns-7d764666f9-mf5xw" [5a7f58c2-f991-46f0-9ece-9a561d53d25f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:42.235688  337106 system_pods.go:61] "coredns-7d764666f9-n5d9d" [159febfd-c1e4-4897-a372-59e4a3069914] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:42.235725  337106 system_pods.go:61] "etcd-ha-422549" [8f26f563-e734-4add-aefe-484f0e873a1e] Running
	I1227 20:16:42.235747  337106 system_pods.go:61] "etcd-ha-422549-m02" [5fed7e48-07c4-4a07-b63b-0fccbd196f6f] Running
	I1227 20:16:42.235772  337106 system_pods.go:61] "etcd-ha-422549-m03" [d22f78a1-2f4c-41e6-b65a-bf7108686c71] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:16:42.235810  337106 system_pods.go:61] "kindnet-28svl" [1494f795-941f-418e-8090-098225eb9c6a] Running
	I1227 20:16:42.235843  337106 system_pods.go:61] "kindnet-4hl7v" [ea2cc8a1-df16-440c-a093-a5d915b249b4] Running
	I1227 20:16:42.235869  337106 system_pods.go:61] "kindnet-5wczs" [df3d7298-4140-464f-a6e8-c614e1683488] Running
	I1227 20:16:42.235899  337106 system_pods.go:61] "kindnet-qkqmv" [66d834ae-af1b-456d-ae48-8a0d6608f961] Running
	I1227 20:16:42.235929  337106 system_pods.go:61] "kube-apiserver-ha-422549" [14f8e794-2ba7-477d-806b-03dd5a33d868] Running
	I1227 20:16:42.235961  337106 system_pods.go:61] "kube-apiserver-ha-422549-m02" [a4b97cc6-26ef-4d46-9ef9-bdee08eb89d6] Running
	I1227 20:16:42.235997  337106 system_pods.go:61] "kube-apiserver-ha-422549-m03" [71f23288-3e33-4bc8-9182-08c190ae026f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:16:42.236045  337106 system_pods.go:61] "kube-controller-manager-ha-422549" [b69af60f-4eac-4e85-aa81-66b7616a46f6] Running
	I1227 20:16:42.236083  337106 system_pods.go:61] "kube-controller-manager-ha-422549-m02" [07c0e68f-76e5-4cee-92a2-05dd2fb4c3e2] Running
	I1227 20:16:42.236112  337106 system_pods.go:61] "kube-controller-manager-ha-422549-m03" [af291694-2986-455c-8588-c2879d10ff3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:16:42.236140  337106 system_pods.go:61] "kube-proxy-cg4z5" [42f74e61-eb67-4d02-8f08-f77f7163f5fc] Running
	I1227 20:16:42.236179  337106 system_pods.go:61] "kube-proxy-kscg6" [baa716d5-546a-4922-ba51-fe1116e36c75] Running
	I1227 20:16:42.236206  337106 system_pods.go:61] "kube-proxy-mhmmn" [d69029af-1fc4-4a31-913e-92e1231e845a] Running
	I1227 20:16:42.236231  337106 system_pods.go:61] "kube-proxy-nqr7h" [d0fc3ef5-765a-4376-94e6-42237908d3fd] Running
	I1227 20:16:42.236262  337106 system_pods.go:61] "kube-scheduler-ha-422549" [549e105d-d2e7-42b6-ae48-098d590e7b1d] Running
	I1227 20:16:42.236297  337106 system_pods.go:61] "kube-scheduler-ha-422549-m02" [db9187da-87a8-4b73-baea-76f3d9ef35c7] Running
	I1227 20:16:42.236326  337106 system_pods.go:61] "kube-scheduler-ha-422549-m03" [2a6b70b3-5303-404f-8b1d-1a65b9b81555] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:16:42.236352  337106 system_pods.go:61] "kube-vip-ha-422549" [32d647ce-90ed-4f56-b4c8-7ed445019d88] Running
	I1227 20:16:42.236391  337106 system_pods.go:61] "kube-vip-ha-422549-m02" [ddde9374-24b7-498d-b829-6902c612b272] Running
	I1227 20:16:42.236414  337106 system_pods.go:61] "kube-vip-ha-422549-m03" [39a60c56-1bf0-4232-9af0-f55e0c66a33d] Running
	I1227 20:16:42.236441  337106 system_pods.go:61] "storage-provisioner" [0d645eab-223f-4dd6-9518-6ab4a21d4c09] Running
	I1227 20:16:42.236483  337106 system_pods.go:74] duration metric: took 18.617239ms to wait for pod list to return data ...
	I1227 20:16:42.236522  337106 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:16:42.247926  337106 default_sa.go:45] found service account: "default"
	I1227 20:16:42.248004  337106 default_sa.go:55] duration metric: took 11.459641ms for default service account to be created ...
	I1227 20:16:42.248030  337106 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:16:42.261989  337106 system_pods.go:86] 26 kube-system pods found
	I1227 20:16:42.262126  337106 system_pods.go:89] "coredns-7d764666f9-mf5xw" [5a7f58c2-f991-46f0-9ece-9a561d53d25f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:42.262177  337106 system_pods.go:89] "coredns-7d764666f9-n5d9d" [159febfd-c1e4-4897-a372-59e4a3069914] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:16:42.262207  337106 system_pods.go:89] "etcd-ha-422549" [8f26f563-e734-4add-aefe-484f0e873a1e] Running
	I1227 20:16:42.262236  337106 system_pods.go:89] "etcd-ha-422549-m02" [5fed7e48-07c4-4a07-b63b-0fccbd196f6f] Running
	I1227 20:16:42.262283  337106 system_pods.go:89] "etcd-ha-422549-m03" [d22f78a1-2f4c-41e6-b65a-bf7108686c71] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:16:42.262312  337106 system_pods.go:89] "kindnet-28svl" [1494f795-941f-418e-8090-098225eb9c6a] Running
	I1227 20:16:42.262338  337106 system_pods.go:89] "kindnet-4hl7v" [ea2cc8a1-df16-440c-a093-a5d915b249b4] Running
	I1227 20:16:42.262359  337106 system_pods.go:89] "kindnet-5wczs" [df3d7298-4140-464f-a6e8-c614e1683488] Running
	I1227 20:16:42.262394  337106 system_pods.go:89] "kindnet-qkqmv" [66d834ae-af1b-456d-ae48-8a0d6608f961] Running
	I1227 20:16:42.262426  337106 system_pods.go:89] "kube-apiserver-ha-422549" [14f8e794-2ba7-477d-806b-03dd5a33d868] Running
	I1227 20:16:42.262449  337106 system_pods.go:89] "kube-apiserver-ha-422549-m02" [a4b97cc6-26ef-4d46-9ef9-bdee08eb89d6] Running
	I1227 20:16:42.262479  337106 system_pods.go:89] "kube-apiserver-ha-422549-m03" [71f23288-3e33-4bc8-9182-08c190ae026f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:16:42.262522  337106 system_pods.go:89] "kube-controller-manager-ha-422549" [b69af60f-4eac-4e85-aa81-66b7616a46f6] Running
	I1227 20:16:42.262568  337106 system_pods.go:89] "kube-controller-manager-ha-422549-m02" [07c0e68f-76e5-4cee-92a2-05dd2fb4c3e2] Running
	I1227 20:16:42.262604  337106 system_pods.go:89] "kube-controller-manager-ha-422549-m03" [af291694-2986-455c-8588-c2879d10ff3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:16:42.262654  337106 system_pods.go:89] "kube-proxy-cg4z5" [42f74e61-eb67-4d02-8f08-f77f7163f5fc] Running
	I1227 20:16:42.262691  337106 system_pods.go:89] "kube-proxy-kscg6" [baa716d5-546a-4922-ba51-fe1116e36c75] Running
	I1227 20:16:42.262719  337106 system_pods.go:89] "kube-proxy-mhmmn" [d69029af-1fc4-4a31-913e-92e1231e845a] Running
	I1227 20:16:42.262764  337106 system_pods.go:89] "kube-proxy-nqr7h" [d0fc3ef5-765a-4376-94e6-42237908d3fd] Running
	I1227 20:16:42.262793  337106 system_pods.go:89] "kube-scheduler-ha-422549" [549e105d-d2e7-42b6-ae48-098d590e7b1d] Running
	I1227 20:16:42.262821  337106 system_pods.go:89] "kube-scheduler-ha-422549-m02" [db9187da-87a8-4b73-baea-76f3d9ef35c7] Running
	I1227 20:16:42.262867  337106 system_pods.go:89] "kube-scheduler-ha-422549-m03" [2a6b70b3-5303-404f-8b1d-1a65b9b81555] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:16:42.262896  337106 system_pods.go:89] "kube-vip-ha-422549" [32d647ce-90ed-4f56-b4c8-7ed445019d88] Running
	I1227 20:16:42.262923  337106 system_pods.go:89] "kube-vip-ha-422549-m02" [ddde9374-24b7-498d-b829-6902c612b272] Running
	I1227 20:16:42.262973  337106 system_pods.go:89] "kube-vip-ha-422549-m03" [39a60c56-1bf0-4232-9af0-f55e0c66a33d] Running
	I1227 20:16:42.263009  337106 system_pods.go:89] "storage-provisioner" [0d645eab-223f-4dd6-9518-6ab4a21d4c09] Running
	I1227 20:16:42.263038  337106 system_pods.go:126] duration metric: took 14.987495ms to wait for k8s-apps to be running ...
	I1227 20:16:42.263064  337106 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:16:42.263186  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:16:42.329952  337106 system_svc.go:56] duration metric: took 66.879518ms WaitForService to wait for kubelet
	I1227 20:16:42.330045  337106 kubeadm.go:587] duration metric: took 14.393713186s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:16:42.330082  337106 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:16:42.334874  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:42.334956  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:42.334985  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:42.335008  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:42.335041  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:42.335069  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:42.335090  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:42.335112  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:42.335144  337106 node_conditions.go:105] duration metric: took 5.018461ms to run NodePressure ...
	I1227 20:16:42.335178  337106 start.go:242] waiting for startup goroutines ...
	I1227 20:16:42.335217  337106 start.go:256] writing updated cluster config ...
	I1227 20:16:42.338858  337106 out.go:203] 
	I1227 20:16:42.342208  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:42.342412  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:16:42.346339  337106 out.go:179] * Starting "ha-422549-m04" worker node in "ha-422549" cluster
	I1227 20:16:42.350180  337106 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:16:42.353431  337106 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:16:42.356594  337106 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:16:42.356748  337106 cache.go:65] Caching tarball of preloaded images
	I1227 20:16:42.356702  337106 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:16:42.357174  337106 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:16:42.357212  337106 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:16:42.357376  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:16:42.393103  337106 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:16:42.393129  337106 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:16:42.393143  337106 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:16:42.393176  337106 start.go:360] acquireMachinesLock for ha-422549-m04: {Name:mk6b025464d8c3992b9046b379a06dcb477a1541 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:16:42.393245  337106 start.go:364] duration metric: took 45.324µs to acquireMachinesLock for "ha-422549-m04"
	I1227 20:16:42.393264  337106 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:16:42.393270  337106 fix.go:54] fixHost starting: m04
	I1227 20:16:42.393757  337106 cli_runner.go:164] Run: docker container inspect ha-422549-m04 --format={{.State.Status}}
	I1227 20:16:42.411553  337106 fix.go:112] recreateIfNeeded on ha-422549-m04: state=Stopped err=<nil>
	W1227 20:16:42.411578  337106 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:16:42.414835  337106 out.go:252] * Restarting existing docker container for "ha-422549-m04" ...
	I1227 20:16:42.414929  337106 cli_runner.go:164] Run: docker start ha-422549-m04
	I1227 20:16:42.767967  337106 cli_runner.go:164] Run: docker container inspect ha-422549-m04 --format={{.State.Status}}
	I1227 20:16:42.792044  337106 kic.go:430] container "ha-422549-m04" state is running.
	I1227 20:16:42.792404  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m04
	I1227 20:16:42.827351  337106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/config.json ...
	I1227 20:16:42.827599  337106 machine.go:94] provisionDockerMachine start ...
	I1227 20:16:42.827669  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:42.865289  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:42.865636  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 20:16:42.865647  337106 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:16:42.866300  337106 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43686->127.0.0.1:33198: read: connection reset by peer
	I1227 20:16:46.033368  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m04
	
	I1227 20:16:46.033393  337106 ubuntu.go:182] provisioning hostname "ha-422549-m04"
	I1227 20:16:46.033521  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:46.061318  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:46.061712  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 20:16:46.061729  337106 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-422549-m04 && echo "ha-422549-m04" | sudo tee /etc/hostname
	I1227 20:16:46.247170  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-422549-m04
	
	I1227 20:16:46.247258  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:46.267833  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:46.268212  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 20:16:46.268238  337106 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-422549-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-422549-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-422549-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:16:46.421793  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:16:46.421817  337106 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:16:46.421834  337106 ubuntu.go:190] setting up certificates
	I1227 20:16:46.421844  337106 provision.go:84] configureAuth start
	I1227 20:16:46.421907  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m04
	I1227 20:16:46.450717  337106 provision.go:143] copyHostCerts
	I1227 20:16:46.450775  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:16:46.450808  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:16:46.450827  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:16:46.450912  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:16:46.450998  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:16:46.451024  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:16:46.451029  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:16:46.451060  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:16:46.451106  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:16:46.451128  337106 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:16:46.451133  337106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:16:46.451165  337106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:16:46.451217  337106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.ha-422549-m04 san=[127.0.0.1 192.168.49.5 ha-422549-m04 localhost minikube]
	I1227 20:16:46.849291  337106 provision.go:177] copyRemoteCerts
	I1227 20:16:46.849383  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:16:46.849466  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:46.871414  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m04/id_rsa Username:docker}
	I1227 20:16:46.969387  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:16:46.969501  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:16:46.998452  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:16:46.998518  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 20:16:47.021097  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:16:47.021160  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 20:16:47.040293  337106 provision.go:87] duration metric: took 618.436373ms to configureAuth
	I1227 20:16:47.040318  337106 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:16:47.040553  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:47.040650  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:47.060413  337106 main.go:144] libmachine: Using SSH client type: native
	I1227 20:16:47.060713  337106 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 20:16:47.060726  337106 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:16:47.416575  337106 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:16:47.416595  337106 machine.go:97] duration metric: took 4.588981536s to provisionDockerMachine
	I1227 20:16:47.416607  337106 start.go:293] postStartSetup for "ha-422549-m04" (driver="docker")
	I1227 20:16:47.416618  337106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:16:47.416709  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:16:47.416753  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:47.436074  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m04/id_rsa Username:docker}
	I1227 20:16:47.541369  337106 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:16:47.545584  337106 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:16:47.545615  337106 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:16:47.545627  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:16:47.545689  337106 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:16:47.545788  337106 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:16:47.545802  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /etc/ssl/certs/2743362.pem
	I1227 20:16:47.545901  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:16:47.553680  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:16:47.574171  337106 start.go:296] duration metric: took 157.548886ms for postStartSetup
	I1227 20:16:47.574295  337106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:16:47.574343  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:47.591734  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m04/id_rsa Username:docker}
	I1227 20:16:47.691874  337106 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:16:47.696839  337106 fix.go:56] duration metric: took 5.303562652s for fixHost
	I1227 20:16:47.696874  337106 start.go:83] releasing machines lock for "ha-422549-m04", held for 5.303620217s
	I1227 20:16:47.696941  337106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m04
	I1227 20:16:47.722974  337106 out.go:179] * Found network options:
	I1227 20:16:47.725907  337106 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1227 20:16:47.728701  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:47.728735  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:47.728747  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:47.728789  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:47.728805  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 20:16:47.728815  337106 proxy.go:120] fail to check proxy env: Error ip not in block
	I1227 20:16:47.728903  337106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:16:47.728946  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:47.729221  337106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:16:47.729281  337106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:16:47.750771  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m04/id_rsa Username:docker}
	I1227 20:16:47.772821  337106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m04/id_rsa Username:docker}
	I1227 20:16:47.915331  337106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:16:47.990713  337106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:16:47.990795  337106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:16:48.000448  337106 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:16:48.000481  337106 start.go:496] detecting cgroup driver to use...
	I1227 20:16:48.000514  337106 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:16:48.000573  337106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:16:48.021384  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:16:48.039922  337106 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:16:48.040026  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:16:48.062813  337106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:16:48.079604  337106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:16:48.252416  337106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:16:48.379968  337106 docker.go:234] disabling docker service ...
	I1227 20:16:48.380079  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:16:48.396866  337106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:16:48.412804  337106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:16:48.580976  337106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:16:48.708477  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:16:48.723957  337106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:16:48.740271  337106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:16:48.740353  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.751954  337106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:16:48.752031  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.770376  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.788562  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.800161  337106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:16:48.809833  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.820365  337106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.838111  337106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:16:48.851461  337106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:16:48.859082  337106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
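	The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place. Reconstructed from those edits (the TOML section headers are an assumption, since the sed patterns match the keys wherever they occur), the relevant fragment ends up roughly as:
	
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"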
	I1227 20:16:48.867125  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:16:49.040301  337106 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:16:49.267978  337106 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:16:49.268078  337106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:16:49.275575  337106 start.go:574] Will wait 60s for crictl version
	I1227 20:16:49.275679  337106 ssh_runner.go:195] Run: which crictl
	I1227 20:16:49.281419  337106 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:16:49.315494  337106 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:16:49.315644  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:16:49.369281  337106 ssh_runner.go:195] Run: crio --version
	I1227 20:16:49.404637  337106 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:16:49.407552  337106 out.go:179]   - env NO_PROXY=192.168.49.2
	I1227 20:16:49.411293  337106 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1227 20:16:49.414211  337106 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1227 20:16:49.417170  337106 cli_runner.go:164] Run: docker network inspect ha-422549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:16:49.439158  337106 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 20:16:49.443392  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:16:49.460241  337106 mustload.go:66] Loading cluster: ha-422549
	I1227 20:16:49.460498  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:49.460747  337106 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:16:49.491043  337106 host.go:66] Checking if "ha-422549" exists ...
	I1227 20:16:49.491329  337106 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549 for IP: 192.168.49.5
	I1227 20:16:49.491337  337106 certs.go:195] generating shared ca certs ...
	I1227 20:16:49.491350  337106 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:16:49.491459  337106 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:16:49.491497  337106 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:16:49.491508  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:16:49.491519  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:16:49.491530  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:16:49.491540  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:16:49.491593  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:16:49.491624  337106 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:16:49.491632  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:16:49.491659  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:16:49.491683  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:16:49.491705  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:16:49.491748  337106 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:16:49.491776  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> /usr/share/ca-certificates/2743362.pem
	I1227 20:16:49.491789  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:49.491812  337106 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem -> /usr/share/ca-certificates/274336.pem
	I1227 20:16:49.491829  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:16:49.515784  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:16:49.544429  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:16:49.565837  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:16:49.591774  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:16:49.613222  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:16:49.642392  337106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:16:49.671654  337106 ssh_runner.go:195] Run: openssl version
	I1227 20:16:49.680550  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:16:49.689578  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:16:49.699039  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:16:49.704553  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:16:49.704616  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:16:49.749850  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:16:49.758256  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:49.766307  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:16:49.776970  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:49.780927  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:49.781029  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:16:49.822773  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:16:49.830459  337106 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:16:49.838202  337106 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:16:49.847286  337106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:16:49.851257  337106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:16:49.851323  337106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:16:49.895472  337106 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
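
The `openssl x509 -hash -noout` / `test -L` pairs above follow the standard OpenSSL CA-directory convention: each certificate copied to /usr/share/ca-certificates is exposed under /etc/ssl/certs as a symlink named after its subject-name hash (e.g. b5213941.0 for minikubeCA.pem). Below is a minimal Go sketch of that step, assuming openssl is on PATH; the paths are copied from the log and error handling is reduced to panics.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem" // from the log above
	targetDir := "/etc/ssl/certs"

	// `openssl x509 -hash -noout` prints the subject-name hash that OpenSSL
	// uses to look up CA certificates in a hashed directory.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	// OpenSSL expects the certificate to be reachable as <hash>.0 (then .1,
	// .2, ... on hash collisions); the log shows minikube testing that link.
	link := filepath.Join(targetDir, hash+".0")
	_ = os.Remove(link) // ignore "does not exist"; mirrors `ln -fs`
	if err := os.Symlink(certPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", certPath, "->", link)
}
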
	I1227 20:16:49.903822  337106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:16:49.907501  337106 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 20:16:49.907548  337106 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.35.0 crio false true} ...
	I1227 20:16:49.907686  337106 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-422549-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-422549 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:16:49.907776  337106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:16:49.915527  337106 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:16:49.915638  337106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1227 20:16:49.923067  337106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 20:16:49.936470  337106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:16:49.951403  337106 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 20:16:49.955422  337106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
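
The /bin/bash one-liner above rewrites /etc/hosts so that control-plane.minikube.internal always resolves to the HA virtual IP: it keeps every line that does not already end in that hostname, appends the current mapping, and copies the result back with sudo. Below is a minimal Go sketch of the same rewrite, assuming the file fits in memory and the process is allowed to write /etc/hosts; the VIP and hostname are taken from the log.

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.49.254\tcontrol-plane.minikube.internal" // from the log above

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Drop any previous control-plane.minikube.internal mapping, then append
	// the current VIP so every node resolves the name the same way.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
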
	I1227 20:16:49.965541  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:16:50.111024  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:16:50.130778  337106 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1227 20:16:50.131217  337106 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:16:50.136553  337106 out.go:179] * Verifying Kubernetes components...
	I1227 20:16:50.139597  337106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:16:50.312113  337106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:16:50.327943  337106 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1227 20:16:50.328030  337106 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1227 20:16:50.328306  337106 node_ready.go:35] waiting up to 6m0s for node "ha-422549-m04" to be "Ready" ...
	I1227 20:16:51.834080  337106 node_ready.go:49] node "ha-422549-m04" is "Ready"
	I1227 20:16:51.834112  337106 node_ready.go:38] duration metric: took 1.505787179s for node "ha-422549-m04" to be "Ready" ...
	I1227 20:16:51.834136  337106 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:16:51.834194  337106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:16:51.847783  337106 system_svc.go:56] duration metric: took 13.639755ms WaitForService to wait for kubelet
	I1227 20:16:51.847815  337106 kubeadm.go:587] duration metric: took 1.71699582s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:16:51.847835  337106 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:16:51.851110  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:51.851141  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:51.851154  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:51.851159  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:51.851164  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:51.851171  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:51.851174  337106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:16:51.851178  337106 node_conditions.go:123] node cpu capacity is 2
	I1227 20:16:51.851184  337106 node_conditions.go:105] duration metric: took 3.342441ms to run NodePressure ...
	I1227 20:16:51.851198  337106 start.go:242] waiting for startup goroutines ...
	I1227 20:16:51.851223  337106 start.go:256] writing updated cluster config ...
	I1227 20:16:51.851550  337106 ssh_runner.go:195] Run: rm -f paused
	I1227 20:16:51.855763  337106 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:16:51.856293  337106 kapi.go:59] client config for ha-422549: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/ha-422549/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 20:16:51.875834  337106 pod_ready.go:83] waiting for pod "coredns-7d764666f9-mf5xw" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 20:16:53.883849  337106 pod_ready.go:104] pod "coredns-7d764666f9-mf5xw" is not "Ready", error: <nil>
	W1227 20:16:56.461572  337106 pod_ready.go:104] pod "coredns-7d764666f9-mf5xw" is not "Ready", error: <nil>
	I1227 20:16:56.881855  337106 pod_ready.go:94] pod "coredns-7d764666f9-mf5xw" is "Ready"
	I1227 20:16:56.881886  337106 pod_ready.go:86] duration metric: took 5.006014091s for pod "coredns-7d764666f9-mf5xw" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.881896  337106 pod_ready.go:83] waiting for pod "coredns-7d764666f9-n5d9d" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.887788  337106 pod_ready.go:94] pod "coredns-7d764666f9-n5d9d" is "Ready"
	I1227 20:16:56.887818  337106 pod_ready.go:86] duration metric: took 5.91483ms for pod "coredns-7d764666f9-n5d9d" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.891258  337106 pod_ready.go:83] waiting for pod "etcd-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.898397  337106 pod_ready.go:94] pod "etcd-ha-422549" is "Ready"
	I1227 20:16:56.898437  337106 pod_ready.go:86] duration metric: took 7.137144ms for pod "etcd-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.898449  337106 pod_ready.go:83] waiting for pod "etcd-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.906314  337106 pod_ready.go:94] pod "etcd-ha-422549-m02" is "Ready"
	I1227 20:16:56.906341  337106 pod_ready.go:86] duration metric: took 7.885849ms for pod "etcd-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:56.906352  337106 pod_ready.go:83] waiting for pod "etcd-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:57.076308  337106 request.go:683] "Waited before sending request" delay="167.221744ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m03"
	I1227 20:16:57.080536  337106 pod_ready.go:94] pod "etcd-ha-422549-m03" is "Ready"
	I1227 20:16:57.080564  337106 pod_ready.go:86] duration metric: took 174.205244ms for pod "etcd-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:57.276888  337106 request.go:683] "Waited before sending request" delay="196.187905ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1227 20:16:57.280390  337106 pod_ready.go:83] waiting for pod "kube-apiserver-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:57.476826  337106 request.go:683] "Waited before sending request" delay="196.340204ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-422549"
	I1227 20:16:57.677055  337106 request.go:683] "Waited before sending request" delay="195.372363ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549"
	I1227 20:16:57.680148  337106 pod_ready.go:94] pod "kube-apiserver-ha-422549" is "Ready"
	I1227 20:16:57.680173  337106 pod_ready.go:86] duration metric: took 399.753981ms for pod "kube-apiserver-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:57.680183  337106 pod_ready.go:83] waiting for pod "kube-apiserver-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:57.876636  337106 request.go:683] "Waited before sending request" delay="196.366115ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-422549-m02"
	I1227 20:16:58.076883  337106 request.go:683] "Waited before sending request" delay="195.240889ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m02"
	I1227 20:16:58.081595  337106 pod_ready.go:94] pod "kube-apiserver-ha-422549-m02" is "Ready"
	I1227 20:16:58.081624  337106 pod_ready.go:86] duration metric: took 401.434113ms for pod "kube-apiserver-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:58.081636  337106 pod_ready.go:83] waiting for pod "kube-apiserver-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:58.277078  337106 request.go:683] "Waited before sending request" delay="195.329053ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-422549-m03"
	I1227 20:16:58.476156  337106 request.go:683] "Waited before sending request" delay="193.265737ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m03"
	I1227 20:16:58.479583  337106 pod_ready.go:94] pod "kube-apiserver-ha-422549-m03" is "Ready"
	I1227 20:16:58.479609  337106 pod_ready.go:86] duration metric: took 397.939042ms for pod "kube-apiserver-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:58.677038  337106 request.go:683] "Waited before sending request" delay="197.311256ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1227 20:16:58.680893  337106 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:58.876237  337106 request.go:683] "Waited before sending request" delay="195.249704ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-422549"
	I1227 20:16:59.076160  337106 request.go:683] "Waited before sending request" delay="194.26927ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549"
	I1227 20:16:59.079502  337106 pod_ready.go:94] pod "kube-controller-manager-ha-422549" is "Ready"
	I1227 20:16:59.079531  337106 pod_ready.go:86] duration metric: took 398.612222ms for pod "kube-controller-manager-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:59.079542  337106 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:59.276926  337106 request.go:683] "Waited before sending request" delay="197.310947ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-422549-m02"
	I1227 20:16:59.476987  337106 request.go:683] "Waited before sending request" delay="195.346795ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m02"
	I1227 20:16:59.480256  337106 pod_ready.go:94] pod "kube-controller-manager-ha-422549-m02" is "Ready"
	I1227 20:16:59.480288  337106 pod_ready.go:86] duration metric: took 400.738794ms for pod "kube-controller-manager-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:59.480298  337106 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:16:59.676709  337106 request.go:683] "Waited before sending request" delay="196.313782ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-422549-m03"
	I1227 20:16:59.876936  337106 request.go:683] "Waited before sending request" delay="194.422474ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m03"
	I1227 20:16:59.880871  337106 pod_ready.go:94] pod "kube-controller-manager-ha-422549-m03" is "Ready"
	I1227 20:16:59.880898  337106 pod_ready.go:86] duration metric: took 400.592723ms for pod "kube-controller-manager-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:00.077121  337106 request.go:683] "Waited before sending request" delay="196.103919ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1227 20:17:00.089664  337106 pod_ready.go:83] waiting for pod "kube-proxy-cg4z5" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:00.277067  337106 request.go:683] "Waited before sending request" delay="187.22976ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cg4z5"
	I1227 20:17:00.476439  337106 request.go:683] "Waited before sending request" delay="191.18971ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m03"
	I1227 20:17:00.480835  337106 pod_ready.go:94] pod "kube-proxy-cg4z5" is "Ready"
	I1227 20:17:00.480892  337106 pod_ready.go:86] duration metric: took 391.133363ms for pod "kube-proxy-cg4z5" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:00.480907  337106 pod_ready.go:83] waiting for pod "kube-proxy-kscg6" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:00.676146  337106 request.go:683] "Waited before sending request" delay="195.116873ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kscg6"
	I1227 20:17:00.876152  337106 request.go:683] "Waited before sending request" delay="192.262917ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m04"
	I1227 20:17:00.881008  337106 pod_ready.go:94] pod "kube-proxy-kscg6" is "Ready"
	I1227 20:17:00.881038  337106 pod_ready.go:86] duration metric: took 400.122065ms for pod "kube-proxy-kscg6" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:00.881048  337106 pod_ready.go:83] waiting for pod "kube-proxy-mhmmn" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:01.076325  337106 request.go:683] "Waited before sending request" delay="195.195166ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mhmmn"
	I1227 20:17:01.276909  337106 request.go:683] "Waited before sending request" delay="195.293101ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549"
	I1227 20:17:01.280680  337106 pod_ready.go:94] pod "kube-proxy-mhmmn" is "Ready"
	I1227 20:17:01.280710  337106 pod_ready.go:86] duration metric: took 399.654071ms for pod "kube-proxy-mhmmn" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:01.280722  337106 pod_ready.go:83] waiting for pod "kube-proxy-nqr7h" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:01.476964  337106 request.go:683] "Waited before sending request" delay="196.12986ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqr7h"
	I1227 20:17:01.676540  337106 request.go:683] "Waited before sending request" delay="192.49818ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m02"
	I1227 20:17:01.685668  337106 pod_ready.go:94] pod "kube-proxy-nqr7h" is "Ready"
	I1227 20:17:01.685702  337106 pod_ready.go:86] duration metric: took 404.972449ms for pod "kube-proxy-nqr7h" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:01.876169  337106 request.go:683] "Waited before sending request" delay="190.319322ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1227 20:17:01.882184  337106 pod_ready.go:83] waiting for pod "kube-scheduler-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:02.076778  337106 request.go:683] "Waited before sending request" delay="194.39653ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-422549"
	I1227 20:17:02.277097  337106 request.go:683] "Waited before sending request" delay="189.264505ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549"
	I1227 20:17:02.281682  337106 pod_ready.go:94] pod "kube-scheduler-ha-422549" is "Ready"
	I1227 20:17:02.281718  337106 pod_ready.go:86] duration metric: took 399.422109ms for pod "kube-scheduler-ha-422549" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:02.281728  337106 pod_ready.go:83] waiting for pod "kube-scheduler-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:02.477021  337106 request.go:683] "Waited before sending request" delay="195.180295ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-422549-m02"
	I1227 20:17:02.676336  337106 request.go:683] "Waited before sending request" delay="193.224619ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m02"
	I1227 20:17:02.680037  337106 pod_ready.go:94] pod "kube-scheduler-ha-422549-m02" is "Ready"
	I1227 20:17:02.680112  337106 pod_ready.go:86] duration metric: took 398.375125ms for pod "kube-scheduler-ha-422549-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:02.680126  337106 pod_ready.go:83] waiting for pod "kube-scheduler-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:02.876405  337106 request.go:683] "Waited before sending request" delay="196.195019ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-422549-m03"
	I1227 20:17:03.076174  337106 request.go:683] "Waited before sending request" delay="195.233596ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-422549-m03"
	I1227 20:17:03.079768  337106 pod_ready.go:94] pod "kube-scheduler-ha-422549-m03" is "Ready"
	I1227 20:17:03.079800  337106 pod_ready.go:86] duration metric: took 399.666897ms for pod "kube-scheduler-ha-422549-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:17:03.079847  337106 pod_ready.go:40] duration metric: took 11.224018864s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
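
The pod_ready.go entries above poll each kube-system pod until its Ready condition is True (or the pod is gone). Below is a minimal client-go sketch of that readiness check, assuming a reachable cluster and the default kubeconfig location; isPodReady and the hard-coded pod name are illustrative, not minikube's actual helper.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name taken from the log; the test repeats this check for every
	// etcd, apiserver, controller-manager, scheduler, kube-proxy and DNS pod.
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-ha-422549", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", isPodReady(pod))
}
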
	I1227 20:17:03.152145  337106 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 20:17:03.155161  337106 out.go:203] 
	W1227 20:17:03.158240  337106 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 20:17:03.161317  337106 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 20:17:03.164544  337106 out.go:179] * Done! kubectl is now configured to use "ha-422549" cluster and "default" namespace by default
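
The repeated "Waited before sending request ... client-side throttling, not priority and fairness" entries above come from client-go's client-side rate limiter rather than from the API server. Below is a minimal sketch of where those limits live, assuming client-go is available; the override values are illustrative (when QPS and Burst are left at zero, client-go falls back to its documented defaults of 5 and 10).

package main

import (
	"fmt"

	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.49.254:8443", // the HA VIP used in the log above
		// Requests beyond QPS/Burst are delayed on the client, which is what
		// produces the "Waited before sending request" log entries.
		QPS:   50,  // illustrative override of the default 5 req/s
		Burst: 100, // illustrative override of the default burst of 10
	}
	fmt.Printf("host=%s qps=%v burst=%d\n", cfg.Host, cfg.QPS, cfg.Burst)
}
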
	
	
	==> CRI-O <==
	Dec 27 20:16:14 ha-422549 crio[669]: time="2025-12-27T20:16:14.963662144Z" level=info msg="Started container" PID=1165 containerID=e30e2fc201d45a408198fe1cf19728fccd5ebe17d0f5255f7589564c690889ec description=kube-system/kube-proxy-mhmmn/kube-proxy id=83f9017b-13c2-4c2b-927f-e22b6986096d name=/runtime.v1.RuntimeService/StartContainer sandboxID=6495c9a31e01c2f5ac17768f9f5e13a5423c5594fc2867804e3bb0a908221252
	Dec 27 20:16:45 ha-422549 conmon[1143]: conmon 7acd50dc5298fb99db44 <ninfo>: container 1152 exited with status 1
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.428315945Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f60cbd10-f7b2-4cd1-80a7-fccba0550911 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.43511179Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=994ad400-2597-4615-b648-cdef116922a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.438853907Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=52ecd850-72c2-4d8c-abb4-bcb68b155882 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.438953761Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.446454815Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.447683161Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/be0e461cabdcf17f5b8d1bb2222c3a204fd930be36abbb0859da36ab3d16462f/merged/etc/passwd: no such file or directory"
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.447776861Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/be0e461cabdcf17f5b8d1bb2222c3a204fd930be36abbb0859da36ab3d16462f/merged/etc/group: no such file or directory"
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.448117445Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.466884564Z" level=info msg="Created container 7361d14a41eae128627f7ec4143721dd6bb4d3ae719e332d08bda13887aca146: kube-system/storage-provisioner/storage-provisioner" id=52ecd850-72c2-4d8c-abb4-bcb68b155882 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.472967068Z" level=info msg="Starting container: 7361d14a41eae128627f7ec4143721dd6bb4d3ae719e332d08bda13887aca146" id=34f02e50-7595-4b71-82ea-dc48fe422b8c name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:16:45 ha-422549 crio[669]: time="2025-12-27T20:16:45.475650188Z" level=info msg="Started container" PID=1422 containerID=7361d14a41eae128627f7ec4143721dd6bb4d3ae719e332d08bda13887aca146 description=kube-system/storage-provisioner/storage-provisioner id=34f02e50-7595-4b71-82ea-dc48fe422b8c name=/runtime.v1.RuntimeService/StartContainer sandboxID=735879ad1c236176f8b5399b57a79b6c0ab6195af5a05ee38eac2aa69480249f
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.268998026Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.274112141Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.274149957Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.274171495Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.277419129Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.277535811Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.27759697Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.281296488Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.281332581Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.281356277Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.285112877Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:16:55 ha-422549 crio[669]: time="2025-12-27T20:16:55.28514943Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	7361d14a41eae       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Running             storage-provisioner       4                   735879ad1c236       storage-provisioner                 kube-system
	7879d1a6c6a98       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf   2 minutes ago        Running             coredns                   2                   bd06f2852a595       coredns-7d764666f9-mf5xw            kube-system
	0fb071b8bd6b6       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   2 minutes ago        Running             busybox                   2                   cf93f418a9a0a       busybox-769dd8b7dd-k7ks6            default
	7acd50dc5298f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   2 minutes ago        Exited              storage-provisioner       3                   735879ad1c236       storage-provisioner                 kube-system
	e30e2fc201d45       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   2 minutes ago        Running             kube-proxy                2                   6495c9a31e01c       kube-proxy-mhmmn                    kube-system
	595cf90732ea1       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf   2 minutes ago        Running             coredns                   2                   6e45d9e1ac155       coredns-7d764666f9-n5d9d            kube-system
	f4b4244b1db16       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13   2 minutes ago        Running             kindnet-cni               2                   828118b404202       kindnet-qkqmv                       kube-system
	8a1b0b47a0ed1       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   2 minutes ago        Running             kube-controller-manager   7                   75a2af3dd93e9       kube-controller-manager-ha-422549   kube-system
	acdd287d4087f       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   2 minutes ago        Running             kube-scheduler            2                   ee19621eddf01       kube-scheduler-ha-422549            kube-system
	7c4ac1dbe59ad       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   2 minutes ago        Exited              kube-controller-manager   6                   75a2af3dd93e9       kube-controller-manager-ha-422549   kube-system
	6b0b91d1da0a4       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   2 minutes ago        Running             kube-apiserver            3                   025c49d6ec070       kube-apiserver-ha-422549            kube-system
	776b31832bd3b       28c5662932f6032ee4faba083d9c2af90232797e1d4f89d9892cb92b26fec299   2 minutes ago        Running             kube-vip                  1                   66af5fba1f89e       kube-vip-ha-422549                  kube-system
	97ce57129ce3b       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   2 minutes ago        Running             etcd                      2                   77b191af13e7e       etcd-ha-422549                      kube-system
	
	
	==> coredns [595cf90732ea108872ec4fb5764679f01619c8baa8a4aca8307dd9cb64a9120f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:35202 - 54427 "HINFO IN 8582221969168170305.1983723465531701443. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.038347152s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	
	
	==> coredns [7879d1a6c6a98b3b227de2b37ae12cd1a3492d804d3ec108fe982379de5ffd0c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:46822 - 1915 "HINFO IN 1020865313171851806.989409873494633985. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013088569s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-422549
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_03_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:03:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:18:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:17:57 +0000   Sat, 27 Dec 2025 20:03:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:17:57 +0000   Sat, 27 Dec 2025 20:03:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:17:57 +0000   Sat, 27 Dec 2025 20:03:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:17:57 +0000   Sat, 27 Dec 2025 20:09:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-422549
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                acd356f3-8732-454f-9ea5-4ebb90b80a04
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-769dd8b7dd-k7ks6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7d764666f9-mf5xw             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     15m
	  kube-system                 coredns-7d764666f9-n5d9d             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     15m
	  kube-system                 etcd-ha-422549                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-qkqmv                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-422549             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-422549    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-mhmmn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-422549             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-422549                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  15m    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  14m    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  13m    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  8m51s  node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  2m23s  node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  2m22s  node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  117s   node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	  Normal  RegisteredNode  56s    node-controller  Node ha-422549 event: Registered Node ha-422549 in Controller
	
	
	Name:               ha-422549-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_27T20_04_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:04:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:18:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:18:05 +0000   Sat, 27 Dec 2025 20:16:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:18:05 +0000   Sat, 27 Dec 2025 20:16:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:18:05 +0000   Sat, 27 Dec 2025 20:16:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:18:05 +0000   Sat, 27 Dec 2025 20:16:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-422549-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                279e934d-6d34-4a11-83f0-a7f36011d6a2
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-769dd8b7dd-v6vks                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-422549-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-5wczs                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-422549-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-422549-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-nqr7h                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-422549-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-422549-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  14m    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  14m    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  13m    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  8m51s  node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  NodeNotReady    8m1s   node-controller  Node ha-422549-m02 status is now: NodeNotReady
	  Normal  RegisteredNode  2m23s  node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  2m22s  node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  117s   node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	  Normal  RegisteredNode  56s    node-controller  Node ha-422549-m02 event: Registered Node ha-422549-m02 in Controller
	
	
	Name:               ha-422549-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_27T20_04_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:04:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:18:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:16:41 +0000   Sat, 27 Dec 2025 20:16:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:16:41 +0000   Sat, 27 Dec 2025 20:16:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:16:41 +0000   Sat, 27 Dec 2025 20:16:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:16:41 +0000   Sat, 27 Dec 2025 20:16:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-422549-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                dd826b6d-21ec-45c4-b392-2d4b9b2daddb
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-769dd8b7dd-qcz4b                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-422549-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-28svl                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-422549-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-422549-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-cg4z5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-422549-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-422549-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  13m    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  13m    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  13m    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  8m51s  node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  NodeNotReady    8m1s   node-controller  Node ha-422549-m03 status is now: NodeNotReady
	  Normal  RegisteredNode  2m23s  node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  2m22s  node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  117s   node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	  Normal  RegisteredNode  56s    node-controller  Node ha-422549-m03 event: Registered Node ha-422549-m03 in Controller
	
	
	Name:               ha-422549-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_27T20_05_33_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:05:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:18:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:18:33 +0000   Sat, 27 Dec 2025 20:16:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:18:33 +0000   Sat, 27 Dec 2025 20:16:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:18:33 +0000   Sat, 27 Dec 2025 20:16:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:18:33 +0000   Sat, 27 Dec 2025 20:16:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-422549-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                45c0e480-898e-46d5-83ce-c457d7b4b021
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4hl7v       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-proxy-kscg6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  13m    node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  13m    node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  13m    node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  8m51s  node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  NodeNotReady    8m1s   node-controller  Node ha-422549-m04 status is now: NodeNotReady
	  Normal  RegisteredNode  2m23s  node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  2m22s  node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  117s   node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	  Normal  RegisteredNode  56s    node-controller  Node ha-422549-m04 event: Registered Node ha-422549-m04 in Controller
	
	
	Name:               ha-422549-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-422549-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=ha-422549
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_27T20_17_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:17:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-422549-m05
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:18:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:18:32 +0000   Sat, 27 Dec 2025 20:17:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:18:32 +0000   Sat, 27 Dec 2025 20:17:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:18:32 +0000   Sat, 27 Dec 2025 20:17:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:18:32 +0000   Sat, 27 Dec 2025 20:18:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.6
	  Hostname:    ha-422549-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                c1c7de59-aebb-4531-b34d-d2fd7fb1d4ab
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-422549-m05                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         52s
	  kube-system                 kindnet-8jzbd                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-ha-422549-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-controller-manager-ha-422549-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-proxy-5dh85                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-ha-422549-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-vip-ha-422549-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  52s   node-controller  Node ha-422549-m05 event: Registered Node ha-422549-m05 in Controller
	  Normal  RegisteredNode  52s   node-controller  Node ha-422549-m05 event: Registered Node ha-422549-m05 in Controller
	  Normal  RegisteredNode  52s   node-controller  Node ha-422549-m05 event: Registered Node ha-422549-m05 in Controller
	  Normal  RegisteredNode  51s   node-controller  Node ha-422549-m05 event: Registered Node ha-422549-m05 in Controller
	
	
	==> dmesg <==
	[Dec27 19:28] overlayfs: idmapped layers are currently not supported
	[ +28.388596] overlayfs: idmapped layers are currently not supported
	[Dec27 19:29] overlayfs: idmapped layers are currently not supported
	[  +9.242530] overlayfs: idmapped layers are currently not supported
	[Dec27 19:30] overlayfs: idmapped layers are currently not supported
	[ +11.577339] overlayfs: idmapped layers are currently not supported
	[Dec27 19:32] overlayfs: idmapped layers are currently not supported
	[ +19.186532] overlayfs: idmapped layers are currently not supported
	[Dec27 19:34] overlayfs: idmapped layers are currently not supported
	[Dec27 19:54] kauditd_printk_skb: 8 callbacks suppressed
	[Dec27 19:56] overlayfs: idmapped layers are currently not supported
	[Dec27 19:59] overlayfs: idmapped layers are currently not supported
	[Dec27 20:00] overlayfs: idmapped layers are currently not supported
	[Dec27 20:03] overlayfs: idmapped layers are currently not supported
	[ +31.019083] overlayfs: idmapped layers are currently not supported
	[Dec27 20:04] overlayfs: idmapped layers are currently not supported
	[Dec27 20:05] overlayfs: idmapped layers are currently not supported
	[Dec27 20:06] overlayfs: idmapped layers are currently not supported
	[Dec27 20:07] overlayfs: idmapped layers are currently not supported
	[  +3.687478] overlayfs: idmapped layers are currently not supported
	[Dec27 20:15] overlayfs: idmapped layers are currently not supported
	[  +3.163851] overlayfs: idmapped layers are currently not supported
	[Dec27 20:16] overlayfs: idmapped layers are currently not supported
	[ +35.129102] overlayfs: idmapped layers are currently not supported
	[Dec27 20:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [97ce57129ce3bc803fd62d49e1f3f06d06aa64d93e2ef36f372084cbbd21e34a] <==
	{"level":"warn","ts":"2025-12-27T20:17:33.067093Z","caller":"embed/config_logging.go:194","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.6:58178","server-name":"","error":"read tcp 192.168.49.2:2379->192.168.49.6:58178: read: connection reset by peer"}
	{"level":"info","ts":"2025-12-27T20:17:33.070109Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(129412930287384796 10978419992923766050 12593026477526642892 13372017479021783969)"}
	{"level":"info","ts":"2025-12-27T20:17:33.070273Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"985b2ed141447d22"}
	{"level":"info","ts":"2025-12-27T20:17:33.070324Z","caller":"etcdserver/server.go:1768","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"985b2ed141447d22"}
	{"level":"warn","ts":"2025-12-27T20:17:33.073103Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"985b2ed141447d22","error":"EOF"}
	{"level":"info","ts":"2025-12-27T20:17:33.163842Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"985b2ed141447d22"}
	{"level":"info","ts":"2025-12-27T20:17:33.171972Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"985b2ed141447d22"}
	{"level":"warn","ts":"2025-12-27T20:17:33.270184Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"985b2ed141447d22","error":"failed to write 985b2ed141447d22 on stream Message (write tcp 192.168.49.2:2380->192.168.49.6:33198: write: broken pipe)"}
	{"level":"warn","ts":"2025-12-27T20:17:33.270416Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"985b2ed141447d22"}
	{"level":"info","ts":"2025-12-27T20:17:33.291423Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"985b2ed141447d22"}
	{"level":"info","ts":"2025-12-27T20:17:33.309344Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"985b2ed141447d22","stream-type":"stream Message"}
	{"level":"info","ts":"2025-12-27T20:17:33.309401Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"985b2ed141447d22"}
	{"level":"info","ts":"2025-12-27T20:17:33.310443Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"985b2ed141447d22","stream-type":"stream MsgApp v2"}
	{"level":"warn","ts":"2025-12-27T20:17:33.310487Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"985b2ed141447d22"}
	{"level":"info","ts":"2025-12-27T20:17:33.310499Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"985b2ed141447d22"}
	{"level":"info","ts":"2025-12-27T20:17:46.488500Z","caller":"etcdserver/server.go:2262","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-12-27T20:17:46.565766Z","caller":"traceutil/trace.go:172","msg":"trace[933768759] transaction","detail":"{read_only:false; response_revision:3112; number_of_response:1; }","duration":"118.199627ms","start":"2025-12-27T20:17:46.447555Z","end":"2025-12-27T20:17:46.565755Z","steps":["trace[933768759] 'process raft request'  (duration: 93.692633ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:17:46.565969Z","caller":"traceutil/trace.go:172","msg":"trace[2018106619] transaction","detail":"{read_only:false; number_of_response:1; response_revision:3113; }","duration":"103.296339ms","start":"2025-12-27T20:17:46.462666Z","end":"2025-12-27T20:17:46.565962Z","steps":["trace[2018106619] 'process raft request'  (duration: 90.454834ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:17:46.566053Z","caller":"traceutil/trace.go:172","msg":"trace[448559542] transaction","detail":"{read_only:false; number_of_response:1; response_revision:3113; }","duration":"103.309648ms","start":"2025-12-27T20:17:46.462738Z","end":"2025-12-27T20:17:46.566048Z","steps":["trace[448559542] 'process raft request'  (duration: 90.419996ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:17:46.566891Z","caller":"traceutil/trace.go:172","msg":"trace[957148971] transaction","detail":"{read_only:false; number_of_response:1; response_revision:3113; }","duration":"102.737799ms","start":"2025-12-27T20:17:46.462774Z","end":"2025-12-27T20:17:46.565512Z","steps":["trace[957148971] 'process raft request'  (duration: 90.850047ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-27T20:17:46.641888Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"116.168202ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-jq272\" limit:1 ","response":"range_response_count:1 size:3431"}
	{"level":"info","ts":"2025-12-27T20:17:46.643075Z","caller":"traceutil/trace.go:172","msg":"trace[892669865] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-jq272; range_end:; response_count:1; response_revision:3119; }","duration":"117.360857ms","start":"2025-12-27T20:17:46.525697Z","end":"2025-12-27T20:17:46.643058Z","steps":["trace[892669865] 'agreement among raft nodes before linearized reading'  (duration: 115.484825ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-27T20:17:47.012232Z","caller":"etcdserver/server.go:2262","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-12-27T20:17:51.433034Z","caller":"etcdserver/server.go:2262","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-12-27T20:18:02.986607Z","caller":"etcdserver/server.go:1872","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"985b2ed141447d22","bytes":6293382,"size":"6.3 MB","took":"30.716112113s"}
	
	
	==> kernel <==
	 20:18:40 up  2:01,  0 user,  load average: 2.84, 1.63, 1.49
	Linux ha-422549 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f4b4244b1db16ca451154424e89d4d56ce2b826c6f69b1c1fa82f892e7966881] <==
	I1227 20:18:15.275525       1 main.go:301] handling current node
	I1227 20:18:15.275560       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1227 20:18:15.275594       1 main.go:324] Node ha-422549-m02 has CIDR [10.244.1.0/24] 
	I1227 20:18:15.284253       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1227 20:18:15.284355       1 main.go:324] Node ha-422549-m03 has CIDR [10.244.2.0/24] 
	I1227 20:18:25.274554       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 20:18:25.274667       1 main.go:301] handling current node
	I1227 20:18:25.274691       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1227 20:18:25.274706       1 main.go:324] Node ha-422549-m02 has CIDR [10.244.1.0/24] 
	I1227 20:18:25.274874       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1227 20:18:25.274887       1 main.go:324] Node ha-422549-m03 has CIDR [10.244.2.0/24] 
	I1227 20:18:25.274964       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1227 20:18:25.274976       1 main.go:324] Node ha-422549-m04 has CIDR [10.244.3.0/24] 
	I1227 20:18:25.275034       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1227 20:18:25.275045       1 main.go:324] Node ha-422549-m05 has CIDR [10.244.4.0/24] 
	I1227 20:18:35.267887       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1227 20:18:35.267924       1 main.go:324] Node ha-422549-m03 has CIDR [10.244.2.0/24] 
	I1227 20:18:35.268162       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1227 20:18:35.268176       1 main.go:324] Node ha-422549-m04 has CIDR [10.244.3.0/24] 
	I1227 20:18:35.268262       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1227 20:18:35.268281       1 main.go:324] Node ha-422549-m05 has CIDR [10.244.4.0/24] 
	I1227 20:18:35.268360       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 20:18:35.268373       1 main.go:301] handling current node
	I1227 20:18:35.268386       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1227 20:18:35.268391       1 main.go:324] Node ha-422549-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [6b0b91d1da0a4c385d0d3110ebc1d18efbc54bab7d6da6bba31c072f2fbd4da9] <==
	I1227 20:16:13.796413       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:13.797072       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 20:16:13.797074       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 20:16:13.797100       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:13.797777       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 20:16:13.797963       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 20:16:13.798046       1 aggregator.go:187] initial CRD sync complete...
	I1227 20:16:13.798090       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 20:16:13.798127       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:16:13.798158       1 cache.go:39] Caches are synced for autoregister controller
	E1227 20:16:13.804997       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 20:16:13.818967       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:13.818980       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 20:16:13.819043       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 20:16:13.824892       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 20:16:13.829882       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:16:13.856520       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:16:13.903885       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:16:14.353399       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1227 20:16:16.144077       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1227 20:16:16.145490       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:16:16.162091       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:16:17.856302       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:16:18.028352       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 20:16:18.100041       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [7c4ac1dbe59ad7d3143dfe74886a6bc3058bfad37ae864b855a6e47c1a4d984e] <==
	I1227 20:15:51.302678       1 serving.go:386] Generated self-signed cert in-memory
	I1227 20:15:51.319186       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1227 20:15:51.319285       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:15:51.320999       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1227 20:15:51.321146       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1227 20:15:51.321625       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1227 20:15:51.321698       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1227 20:16:13.577648       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [8a1b0b47a0ed1caecc63a10c0f1f9666bd9ee325c50ecf1f6c7e085c9598dbfa] <==
	I1227 20:16:17.634716       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.634834       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.634959       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.635096       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.635317       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.635492       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.635766       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.656398       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.659067       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.751050       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549-m02"
	I1227 20:16:17.752259       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549-m03"
	I1227 20:16:17.752315       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549-m04"
	I1227 20:16:17.752343       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549"
	I1227 20:16:17.820816       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.820838       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:16:17.820843       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:16:17.829110       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:17.887401       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 20:16:51.537342       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-422549-m04"
	E1227 20:17:45.611567       1 certificate_controller.go:158] "Unhandled Error" err="Sync csr-dwmks failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-dwmks\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1227 20:17:46.276633       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-422549-m05\" does not exist"
	I1227 20:17:46.277659       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-422549-m04"
	I1227 20:17:46.316698       1 range_allocator.go:433] "Set node PodCIDR" node="ha-422549-m05" podCIDRs=["10.244.4.0/24"]
	I1227 20:17:48.058474       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-422549-m05"
	I1227 20:18:32.783820       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-422549-m04"
	
	
	==> kube-proxy [e30e2fc201d45a408198fe1cf19728fccd5ebe17d0f5255f7589564c690889ec] <==
	I1227 20:16:15.717666       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:16:16.119519       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:16:16.241830       1 shared_informer.go:377] "Caches are synced"
	I1227 20:16:16.241930       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1227 20:16:16.242046       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:16:16.278310       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:16:16.278410       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:16:16.293265       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:16:16.293750       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:16:16.293812       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:16:16.298528       1 config.go:200] "Starting service config controller"
	I1227 20:16:16.298607       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:16:16.298663       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:16:16.298690       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:16:16.302047       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:16:16.303313       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:16:16.304201       1 config.go:309] "Starting node config controller"
	I1227 20:16:16.304276       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:16:16.304307       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:16:16.399041       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:16:16.402314       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 20:16:16.412735       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [acdd287d4087fec2c7c00eb589c13b06231128c1441e2db4a8f74c57600a6e67] <==
	E1227 20:17:46.524702       1 framework.go:1544] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5dh85\": pod kube-proxy-5dh85 is already assigned to node \"ha-422549-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5dh85" node="ha-422549-m05"
	E1227 20:17:46.524974       1 schedule_one.go:370] "scheduler cache ForgetPod failed" err="pod 9f3f6c7d-38b1-4845-bb80-86214ed404f5(kube-system/kube-proxy-5dh85) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-5dh85"
	E1227 20:17:46.525768       1 schedule_one.go:1068] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-n8tp2\": pod kindnet-n8tp2 is already assigned to node \"ha-422549-m05\"" logger="UnhandledError" pod="kube-system/kindnet-n8tp2"
	I1227 20:17:46.525898       1 schedule_one.go:1081] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-jq272" node="ha-422549-m05"
	I1227 20:17:46.526049       1 schedule_one.go:1081] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-n8tp2" node="ha-422549-m05"
	E1227 20:17:46.525974       1 schedule_one.go:1068] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5dh85\": pod kube-proxy-5dh85 is already assigned to node \"ha-422549-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-5dh85"
	I1227 20:17:46.532106       1 schedule_one.go:1081] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5dh85" node="ha-422549-m05"
	E1227 20:17:46.586195       1 framework.go:1544] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-zkvfs\": pod kindnet-zkvfs is already assigned to node \"ha-422549-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-zkvfs" node="ha-422549-m05"
	E1227 20:17:46.586352       1 schedule_one.go:1068] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-zkvfs\": pod kindnet-zkvfs is already assigned to node \"ha-422549-m05\"" logger="UnhandledError" pod="kube-system/kindnet-zkvfs"
	E1227 20:17:46.586524       1 framework.go:1544] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7bg75\": pod kube-proxy-7bg75 is already assigned to node \"ha-422549-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7bg75" node="ha-422549-m05"
	E1227 20:17:46.588267       1 schedule_one.go:370] "scheduler cache ForgetPod failed" err="pod 26aa7532-a8d9-4383-b3c1-de0f94f67bbb(kube-system/kube-proxy-7bg75) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-7bg75"
	E1227 20:17:46.588363       1 schedule_one.go:1068] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7bg75\": pod kube-proxy-7bg75 is already assigned to node \"ha-422549-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-7bg75"
	E1227 20:17:46.588587       1 framework.go:1544] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vqzqr\": pod kindnet-vqzqr is already assigned to node \"ha-422549-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-vqzqr" node="ha-422549-m05"
	E1227 20:17:46.588647       1 schedule_one.go:370] "scheduler cache ForgetPod failed" err="pod d0346179-7b1e-48ee-b3fc-4192653b696b(kube-system/kindnet-vqzqr) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-vqzqr"
	E1227 20:17:46.591374       1 schedule_one.go:1068] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vqzqr\": pod kindnet-vqzqr is already assigned to node \"ha-422549-m05\"" logger="UnhandledError" pod="kube-system/kindnet-vqzqr"
	I1227 20:17:46.591476       1 schedule_one.go:1081] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-vqzqr" node="ha-422549-m05"
	I1227 20:17:46.591715       1 schedule_one.go:1081] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-7bg75" node="ha-422549-m05"
	E1227 20:17:47.054055       1 framework.go:1544] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-p6z5l\": pod kube-proxy-p6z5l is already assigned to node \"ha-422549-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-p6z5l" node="ha-422549-m05"
	E1227 20:17:47.081150       1 schedule_one.go:370] "scheduler cache ForgetPod failed" err="pod c136964d-e5da-458b-8f5e-451b33988bab(kube-system/kube-proxy-p6z5l) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-p6z5l"
	E1227 20:17:47.081267       1 schedule_one.go:1068] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-p6z5l\": pod kube-proxy-p6z5l is already assigned to node \"ha-422549-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-p6z5l"
	E1227 20:17:47.054341       1 framework.go:1544] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-8jzbd\": pod kindnet-8jzbd is already assigned to node \"ha-422549-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-8jzbd" node="ha-422549-m05"
	E1227 20:17:47.081381       1 schedule_one.go:370] "scheduler cache ForgetPod failed" err="pod 7b14ad85-d98b-47dc-bcfc-96d2202ac94e(kube-system/kindnet-8jzbd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-8jzbd"
	E1227 20:17:47.082504       1 schedule_one.go:1068] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8jzbd\": pod kindnet-8jzbd is already assigned to node \"ha-422549-m05\"" logger="UnhandledError" pod="kube-system/kindnet-8jzbd"
	I1227 20:17:47.082655       1 schedule_one.go:1081] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-8jzbd" node="ha-422549-m05"
	I1227 20:17:47.082609       1 schedule_one.go:1081] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-p6z5l" node="ha-422549-m05"
	
	
	==> kubelet <==
	Dec 27 20:16:15 ha-422549 kubelet[804]: E1227 20:16:15.322797     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	Dec 27 20:16:15 ha-422549 kubelet[804]: E1227 20:16:15.333130     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-ha-422549" containerName="kube-controller-manager"
	Dec 27 20:16:15 ha-422549 kubelet[804]: E1227 20:16:15.350577     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	Dec 27 20:16:15 ha-422549 kubelet[804]: I1227 20:16:15.550682     804 kubelet_node_status.go:74] "Attempting to register node" node="ha-422549"
	Dec 27 20:16:15 ha-422549 kubelet[804]: I1227 20:16:15.614938     804 kubelet_node_status.go:123] "Node was previously registered" node="ha-422549"
	Dec 27 20:16:15 ha-422549 kubelet[804]: I1227 20:16:15.615228     804 kubelet_node_status.go:77] "Successfully registered node" node="ha-422549"
	Dec 27 20:16:15 ha-422549 kubelet[804]: I1227 20:16:15.615315     804 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 27 20:16:15 ha-422549 kubelet[804]: I1227 20:16:15.616294     804 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 27 20:16:16 ha-422549 kubelet[804]: E1227 20:16:16.196898     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-ha-422549" containerName="kube-scheduler"
	Dec 27 20:16:16 ha-422549 kubelet[804]: E1227 20:16:16.353325     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	Dec 27 20:16:16 ha-422549 kubelet[804]: E1227 20:16:16.354607     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	Dec 27 20:16:20 ha-422549 kubelet[804]: E1227 20:16:20.687129     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-ha-422549" containerName="kube-controller-manager"
	Dec 27 20:16:21 ha-422549 kubelet[804]: E1227 20:16:21.706076     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-ha-422549" containerName="kube-apiserver"
	Dec 27 20:16:22 ha-422549 kubelet[804]: E1227 20:16:22.368737     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-ha-422549" containerName="kube-apiserver"
	Dec 27 20:16:30 ha-422549 kubelet[804]: E1227 20:16:30.696140     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-ha-422549" containerName="kube-controller-manager"
	Dec 27 20:16:45 ha-422549 kubelet[804]: I1227 20:16:45.426555     804 scope.go:122] "RemoveContainer" containerID="7acd50dc5298fb99db44502b466c9e34b79ddce5613479143c4c5834f09f1731"
	Dec 27 20:16:56 ha-422549 kubelet[804]: E1227 20:16:56.356173     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	Dec 27 20:16:56 ha-422549 kubelet[804]: E1227 20:16:56.356735     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	Dec 27 20:17:26 ha-422549 kubelet[804]: E1227 20:17:26.167299     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-ha-422549" containerName="kube-apiserver"
	Dec 27 20:17:32 ha-422549 kubelet[804]: E1227 20:17:32.167238     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-ha-422549" containerName="etcd"
	Dec 27 20:17:44 ha-422549 kubelet[804]: E1227 20:17:44.166617     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-ha-422549" containerName="kube-controller-manager"
	Dec 27 20:17:45 ha-422549 kubelet[804]: E1227 20:17:45.167196     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-ha-422549" containerName="kube-scheduler"
	Dec 27 20:18:14 ha-422549 kubelet[804]: E1227 20:18:14.167080     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-mf5xw" containerName="coredns"
	Dec 27 20:18:21 ha-422549 kubelet[804]: E1227 20:18:21.167434     804 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-n5d9d" containerName="coredns"
	Dec 27 20:18:33 ha-422549 kubelet[804]: E1227 20:18:33.167469     804 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-ha-422549" containerName="etcd"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-422549 -n ha-422549
helpers_test.go:270: (dbg) Run:  kubectl --context ha-422549 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.31s)
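For manual triage of an HA failure like this one, a minimal follow-up sketch, assuming the ha-422549 profile is still up (the profile and context names are the ones used by the post-mortem commands above):

	out/minikube-linux-arm64 -p ha-422549 status                  # host/kubelet/apiserver state for every node in the profile
	kubectl --context ha-422549 get nodes -o wide                 # all five nodes (ha-422549, -m02 .. -m05) should report Ready
	kubectl --context ha-422549 get pods -n kube-system -o wide   # control-plane pods and their node placement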

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.43s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-809975 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-809975 --output=json --user=testUser: exit status 80 (2.428118468s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"846aa01f-a6e0-4861-81fc-98d250d26c5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-809975 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"f40c637d-7e61-4920-b345-d359b579d939","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-27T20:19:42Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"c9956a68-bf5d-48ec-a2bc-2da860468ccf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-809975 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.43s)
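The GUEST_PAUSE payload above quotes the exact check that failed: minikube shells out to `sudo runc list -f json` inside the node, and runc aborts because /run/runc does not exist. A minimal reproduction sketch, assuming the json-output-809975 profile is still running:

	# same invocation minikube issues during pause, run inside the node over SSH
	out/minikube-linux-arm64 -p json-output-809975 ssh -- sudo runc list -f json
	# alternative view of the node's containers, through the CRI-O socket instead of runc
	out/minikube-linux-arm64 -p json-output-809975 ssh -- sudo crictl ps -a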

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.99s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-809975 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-809975 --output=json --user=testUser: exit status 80 (1.985295092s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"19f06203-3ab3-4346-87d9-555cc084e300","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-809975 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"80519d8f-33a5-4c33-bb9b-80ae5ad7abfe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-27T20:19:44Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"1fa8927b-0888-4aaa-b833-4fad2fe1b966","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-809975 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.99s)
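With --output=json, pause and unpause emit one JSON event per line (specversion, id, source, type, data), so the failure payload can be pulled straight out of the stream; a small sketch, assuming jq is available on the test host:

	# keep only the error events and print their message field
	out/minikube-linux-arm64 unpause -p json-output-809975 --output=json --user=testUser \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'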

                                                
                                    
x
+
TestPause/serial/Pause (9.56s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-063268 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-063268 --alsologtostderr -v=5: exit status 80 (2.718334774s)

                                                
                                                
-- stdout --
	* Pausing node pause-063268 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:32:42.882790  416936 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:32:42.883262  416936 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:32:42.883321  416936 out.go:374] Setting ErrFile to fd 2...
	I1227 20:32:42.883340  416936 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:32:42.883656  416936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:32:42.883960  416936 out.go:368] Setting JSON to false
	I1227 20:32:42.884024  416936 mustload.go:66] Loading cluster: pause-063268
	I1227 20:32:42.884495  416936 config.go:182] Loaded profile config "pause-063268": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:32:42.885027  416936 cli_runner.go:164] Run: docker container inspect pause-063268 --format={{.State.Status}}
	I1227 20:32:42.902240  416936 host.go:66] Checking if "pause-063268" exists ...
	I1227 20:32:42.902582  416936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:32:42.956877  416936 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 20:32:42.947604659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:32:42.957573  416936 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22332/minikube-v1.37.0-1766811082-22332-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766811082-22332/minikube-v1.37.0-1766811082-22332-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766811082-22332-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:pause-063268 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true
) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 20:32:42.961071  416936 out.go:179] * Pausing node pause-063268 ... 
	I1227 20:32:42.964775  416936 host.go:66] Checking if "pause-063268" exists ...
	I1227 20:32:42.965087  416936 ssh_runner.go:195] Run: systemctl --version
	I1227 20:32:42.965147  416936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-063268
	I1227 20:32:42.982202  416936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/pause-063268/id_rsa Username:docker}
	I1227 20:32:43.079881  416936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:32:43.095632  416936 pause.go:52] kubelet running: true
	I1227 20:32:43.095708  416936 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:32:43.304592  416936 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:32:43.304735  416936 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:32:43.368008  416936 cri.go:96] found id: "3dcb1acad15eb773b8e4d1bf48eff17bd636db39f14d8b4c2114369015e50c99"
	I1227 20:32:43.368084  416936 cri.go:96] found id: "46206dca1d54589a2dc7e056f6533461fa94f3ac29b80ee9ef2c43ecd0f219da"
	I1227 20:32:43.368102  416936 cri.go:96] found id: "2d5187c09e0d2282d9b217df137e91d2bde4c8df84c181dfc67dd7d051c343d1"
	I1227 20:32:43.368125  416936 cri.go:96] found id: "3abedd3ca1b6781d775413bebb693eb5e728039da24f46af35b97da37204edf6"
	I1227 20:32:43.368164  416936 cri.go:96] found id: "fa108279b1eb9ceb70588f2233e4945ac93e186f6bcf3c4ee7c054449cf0f75b"
	I1227 20:32:43.368189  416936 cri.go:96] found id: "8cac825e5dc8b16adf125824bbe9d9e4396548dcbe927b2c4b11f08cff8dfa4d"
	I1227 20:32:43.368209  416936 cri.go:96] found id: "7d35bd38f16e24300670b1d7b0a1bb5c51e54ae70ff336a27ea418629779fd43"
	I1227 20:32:43.368257  416936 cri.go:96] found id: "416558ce12be971f6a4b6e7ef366b6cd5749f709d73875a41b18c0ee365732fb"
	I1227 20:32:43.368282  416936 cri.go:96] found id: "b6029d6a282d2e2b9baf6745700225b920389457d9eaed47d3219d6a1698087b"
	I1227 20:32:43.368330  416936 cri.go:96] found id: "dc98608038347ea5a66daf1ae3446ca17649266730a5d13c8aa1465c8a6f3124"
	I1227 20:32:43.368353  416936 cri.go:96] found id: "9777b2b15db10899b07ba1594d186322030830e3fdb8ddbd2a0f20737d3d28c8"
	I1227 20:32:43.368369  416936 cri.go:96] found id: "d3397c067b48bfc38eea1e3595441ae62853459c7ca60afd179fd9d4a21ac34d"
	I1227 20:32:43.368400  416936 cri.go:96] found id: "9e51e3ee441e237964c680e42a957dc502b056f13d851926ff327daf2de4f7d4"
	I1227 20:32:43.368421  416936 cri.go:96] found id: "768d0620e2839f7eab59ee37f96feaa3ecf0b3a65150b795226874a68029be62"
	I1227 20:32:43.368437  416936 cri.go:96] found id: ""
	I1227 20:32:43.368522  416936 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:32:43.379333  416936 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:32:43Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:32:43.616640  416936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:32:43.633229  416936 pause.go:52] kubelet running: false
	I1227 20:32:43.633298  416936 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:32:43.813342  416936 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:32:43.813438  416936 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:32:43.874484  416936 cri.go:96] found id: "3dcb1acad15eb773b8e4d1bf48eff17bd636db39f14d8b4c2114369015e50c99"
	I1227 20:32:43.874505  416936 cri.go:96] found id: "46206dca1d54589a2dc7e056f6533461fa94f3ac29b80ee9ef2c43ecd0f219da"
	I1227 20:32:43.874510  416936 cri.go:96] found id: "2d5187c09e0d2282d9b217df137e91d2bde4c8df84c181dfc67dd7d051c343d1"
	I1227 20:32:43.874514  416936 cri.go:96] found id: "3abedd3ca1b6781d775413bebb693eb5e728039da24f46af35b97da37204edf6"
	I1227 20:32:43.874517  416936 cri.go:96] found id: "fa108279b1eb9ceb70588f2233e4945ac93e186f6bcf3c4ee7c054449cf0f75b"
	I1227 20:32:43.874521  416936 cri.go:96] found id: "8cac825e5dc8b16adf125824bbe9d9e4396548dcbe927b2c4b11f08cff8dfa4d"
	I1227 20:32:43.874524  416936 cri.go:96] found id: "7d35bd38f16e24300670b1d7b0a1bb5c51e54ae70ff336a27ea418629779fd43"
	I1227 20:32:43.874527  416936 cri.go:96] found id: "416558ce12be971f6a4b6e7ef366b6cd5749f709d73875a41b18c0ee365732fb"
	I1227 20:32:43.874530  416936 cri.go:96] found id: "b6029d6a282d2e2b9baf6745700225b920389457d9eaed47d3219d6a1698087b"
	I1227 20:32:43.874536  416936 cri.go:96] found id: "dc98608038347ea5a66daf1ae3446ca17649266730a5d13c8aa1465c8a6f3124"
	I1227 20:32:43.874539  416936 cri.go:96] found id: "9777b2b15db10899b07ba1594d186322030830e3fdb8ddbd2a0f20737d3d28c8"
	I1227 20:32:43.874542  416936 cri.go:96] found id: "d3397c067b48bfc38eea1e3595441ae62853459c7ca60afd179fd9d4a21ac34d"
	I1227 20:32:43.874545  416936 cri.go:96] found id: "9e51e3ee441e237964c680e42a957dc502b056f13d851926ff327daf2de4f7d4"
	I1227 20:32:43.874548  416936 cri.go:96] found id: "768d0620e2839f7eab59ee37f96feaa3ecf0b3a65150b795226874a68029be62"
	I1227 20:32:43.874551  416936 cri.go:96] found id: ""
	I1227 20:32:43.874602  416936 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:32:44.430500  416936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:32:44.443082  416936 pause.go:52] kubelet running: false
	I1227 20:32:44.443154  416936 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:32:44.588315  416936 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:32:44.588405  416936 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:32:44.660321  416936 cri.go:96] found id: "3dcb1acad15eb773b8e4d1bf48eff17bd636db39f14d8b4c2114369015e50c99"
	I1227 20:32:44.660356  416936 cri.go:96] found id: "46206dca1d54589a2dc7e056f6533461fa94f3ac29b80ee9ef2c43ecd0f219da"
	I1227 20:32:44.660363  416936 cri.go:96] found id: "2d5187c09e0d2282d9b217df137e91d2bde4c8df84c181dfc67dd7d051c343d1"
	I1227 20:32:44.660366  416936 cri.go:96] found id: "3abedd3ca1b6781d775413bebb693eb5e728039da24f46af35b97da37204edf6"
	I1227 20:32:44.660370  416936 cri.go:96] found id: "fa108279b1eb9ceb70588f2233e4945ac93e186f6bcf3c4ee7c054449cf0f75b"
	I1227 20:32:44.660373  416936 cri.go:96] found id: "8cac825e5dc8b16adf125824bbe9d9e4396548dcbe927b2c4b11f08cff8dfa4d"
	I1227 20:32:44.660376  416936 cri.go:96] found id: "7d35bd38f16e24300670b1d7b0a1bb5c51e54ae70ff336a27ea418629779fd43"
	I1227 20:32:44.660379  416936 cri.go:96] found id: "416558ce12be971f6a4b6e7ef366b6cd5749f709d73875a41b18c0ee365732fb"
	I1227 20:32:44.660382  416936 cri.go:96] found id: "b6029d6a282d2e2b9baf6745700225b920389457d9eaed47d3219d6a1698087b"
	I1227 20:32:44.660390  416936 cri.go:96] found id: "dc98608038347ea5a66daf1ae3446ca17649266730a5d13c8aa1465c8a6f3124"
	I1227 20:32:44.660393  416936 cri.go:96] found id: "9777b2b15db10899b07ba1594d186322030830e3fdb8ddbd2a0f20737d3d28c8"
	I1227 20:32:44.660396  416936 cri.go:96] found id: "d3397c067b48bfc38eea1e3595441ae62853459c7ca60afd179fd9d4a21ac34d"
	I1227 20:32:44.660399  416936 cri.go:96] found id: "9e51e3ee441e237964c680e42a957dc502b056f13d851926ff327daf2de4f7d4"
	I1227 20:32:44.660401  416936 cri.go:96] found id: "768d0620e2839f7eab59ee37f96feaa3ecf0b3a65150b795226874a68029be62"
	I1227 20:32:44.660406  416936 cri.go:96] found id: ""
	I1227 20:32:44.660454  416936 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:32:45.071524  416936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:32:45.091925  416936 pause.go:52] kubelet running: false
	I1227 20:32:45.092021  416936 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:32:45.395915  416936 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:32:45.396017  416936 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:32:45.505787  416936 cri.go:96] found id: "3dcb1acad15eb773b8e4d1bf48eff17bd636db39f14d8b4c2114369015e50c99"
	I1227 20:32:45.505817  416936 cri.go:96] found id: "46206dca1d54589a2dc7e056f6533461fa94f3ac29b80ee9ef2c43ecd0f219da"
	I1227 20:32:45.505822  416936 cri.go:96] found id: "2d5187c09e0d2282d9b217df137e91d2bde4c8df84c181dfc67dd7d051c343d1"
	I1227 20:32:45.505825  416936 cri.go:96] found id: "3abedd3ca1b6781d775413bebb693eb5e728039da24f46af35b97da37204edf6"
	I1227 20:32:45.505829  416936 cri.go:96] found id: "fa108279b1eb9ceb70588f2233e4945ac93e186f6bcf3c4ee7c054449cf0f75b"
	I1227 20:32:45.505833  416936 cri.go:96] found id: "8cac825e5dc8b16adf125824bbe9d9e4396548dcbe927b2c4b11f08cff8dfa4d"
	I1227 20:32:45.505836  416936 cri.go:96] found id: "7d35bd38f16e24300670b1d7b0a1bb5c51e54ae70ff336a27ea418629779fd43"
	I1227 20:32:45.505839  416936 cri.go:96] found id: "416558ce12be971f6a4b6e7ef366b6cd5749f709d73875a41b18c0ee365732fb"
	I1227 20:32:45.505843  416936 cri.go:96] found id: "b6029d6a282d2e2b9baf6745700225b920389457d9eaed47d3219d6a1698087b"
	I1227 20:32:45.505850  416936 cri.go:96] found id: "dc98608038347ea5a66daf1ae3446ca17649266730a5d13c8aa1465c8a6f3124"
	I1227 20:32:45.505853  416936 cri.go:96] found id: "9777b2b15db10899b07ba1594d186322030830e3fdb8ddbd2a0f20737d3d28c8"
	I1227 20:32:45.505856  416936 cri.go:96] found id: "d3397c067b48bfc38eea1e3595441ae62853459c7ca60afd179fd9d4a21ac34d"
	I1227 20:32:45.505859  416936 cri.go:96] found id: "9e51e3ee441e237964c680e42a957dc502b056f13d851926ff327daf2de4f7d4"
	I1227 20:32:45.505862  416936 cri.go:96] found id: "768d0620e2839f7eab59ee37f96feaa3ecf0b3a65150b795226874a68029be62"
	I1227 20:32:45.505873  416936 cri.go:96] found id: ""
	I1227 20:32:45.505926  416936 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:32:45.520982  416936 out.go:203] 
	W1227 20:32:45.524082  416936 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:32:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 20:32:45.524108  416936 out.go:285] * 
	W1227 20:32:45.527755  416936 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 20:32:45.530983  416936 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-063268 --alsologtostderr -v=5" : exit status 80
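The stderr above contains the whole failure: the pause path finds the kube-system container IDs through crictl, but the follow-up `sudo runc list -f json` fails on every retry because runc's default state directory /run/runc does not exist on the node, so minikube gives up with GUEST_PAUSE. A minimal diagnostic sketch, assuming shell access to the guest via `minikube ssh` (profile name taken from this run; the commands mirror the ones in the log):

    # CRI-level listing succeeds, as it did in the log
    out/minikube-linux-arm64 -p pause-063268 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # Low-level check used by the pause path; fails with "open /run/runc: no such file or directory"
    out/minikube-linux-arm64 -p pause-063268 ssh -- sudo runc list -f json
    # Same call with runc's default state directory spelled out
    out/minikube-linux-arm64 -p pause-063268 ssh -- sudo runc --root /run/runc list -f json

One plausible, unconfirmed explanation is that this CRI-O installation keeps its runtime state somewhere other than /run/runc, so that directory is never created; the log only shows that the directory is missing while crictl can still see the containers.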
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-063268
helpers_test.go:244: (dbg) docker inspect pause-063268:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "854628bf54d21e1d1c2b7cb5018f445262206bc2818e9cf4ce7c2edd50bbb7b8",
	        "Created": "2025-12-27T20:31:31.611853285Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 411170,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:31:32.499349396Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/854628bf54d21e1d1c2b7cb5018f445262206bc2818e9cf4ce7c2edd50bbb7b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/854628bf54d21e1d1c2b7cb5018f445262206bc2818e9cf4ce7c2edd50bbb7b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/854628bf54d21e1d1c2b7cb5018f445262206bc2818e9cf4ce7c2edd50bbb7b8/hosts",
	        "LogPath": "/var/lib/docker/containers/854628bf54d21e1d1c2b7cb5018f445262206bc2818e9cf4ce7c2edd50bbb7b8/854628bf54d21e1d1c2b7cb5018f445262206bc2818e9cf4ce7c2edd50bbb7b8-json.log",
	        "Name": "/pause-063268",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-063268:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-063268",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "854628bf54d21e1d1c2b7cb5018f445262206bc2818e9cf4ce7c2edd50bbb7b8",
	                "LowerDir": "/var/lib/docker/overlay2/473efb73f9bbf71c3f84444c41621d215a40c833c7c39aa6cac7d656220abd11-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/473efb73f9bbf71c3f84444c41621d215a40c833c7c39aa6cac7d656220abd11/merged",
	                "UpperDir": "/var/lib/docker/overlay2/473efb73f9bbf71c3f84444c41621d215a40c833c7c39aa6cac7d656220abd11/diff",
	                "WorkDir": "/var/lib/docker/overlay2/473efb73f9bbf71c3f84444c41621d215a40c833c7c39aa6cac7d656220abd11/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-063268",
	                "Source": "/var/lib/docker/volumes/pause-063268/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-063268",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-063268",
	                "name.minikube.sigs.k8s.io": "pause-063268",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "acb16375412b336ead898f1ba361363bdc3c70070303d5b39e40c45b703c4692",
	            "SandboxKey": "/var/run/docker/netns/acb16375412b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33323"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33324"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33327"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33325"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33326"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-063268": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:9f:82:1d:4e:f0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2dd6a5d24caa4467dcfd338fc6fe271087a3ce69a56c62026d391641d195417c",
	                    "EndpointID": "21781e7631eda6d21633e9647b8c53ba39fbc56696e637192d3a5a98f1f0c291",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-063268",
	                        "854628bf54d2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
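The inspect dump shows the Docker layer itself is healthy (State.Status "running", Paused false, all five ports published); the breakage is inside the guest. During triage the same facts can be pulled with plain `docker inspect` templates, a sketch that is not specific to this harness:

    # Container state at the Docker level
    docker inspect -f '{{.State.Status}} paused={{.State.Paused}} pid={{.State.Pid}}' pause-063268
    # Published SSH port (33323 in this run)
    docker inspect -f '{{(index .NetworkSettings.Ports "22/tcp" 0).HostPort}}' pause-063268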
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-063268 -n pause-063268
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-063268 -n pause-063268: exit status 2 (419.899365ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
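`minikube status` encodes cluster health in its exit code, so the non-zero status here reflects the degraded cluster (the host reports Running while kubelet was already disabled by the failed pause) rather than a broken command, which is why the helper treats it as possibly benign. A machine-readable view of the same state, using standard status flags:

    out/minikube-linux-arm64 status -p pause-063268 -n pause-063268 --output json
    out/minikube-linux-arm64 status -p pause-063268 -n pause-063268 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'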
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-063268 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-063268 logs -n 25: (1.741680537s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                       │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ list -p multinode-458368                                                                                         │ multinode-458368            │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ start   │ -p multinode-458368-m02 --driver=docker  --container-runtime=crio                                                │ multinode-458368-m02        │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ start   │ -p multinode-458368-m03 --driver=docker  --container-runtime=crio                                                │ multinode-458368-m03        │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:29 UTC │
	│ node    │ add -p multinode-458368                                                                                          │ multinode-458368            │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p multinode-458368-m03                                                                                          │ multinode-458368-m03        │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ delete  │ -p multinode-458368                                                                                              │ multinode-458368            │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p scheduled-stop-363352 --memory=3072 --driver=docker  --container-runtime=crio                                 │ scheduled-stop-363352       │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ stop    │ -p scheduled-stop-363352 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-363352       │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ stop    │ -p scheduled-stop-363352 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-363352       │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ stop    │ -p scheduled-stop-363352 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-363352       │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ stop    │ -p scheduled-stop-363352 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-363352       │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ stop    │ -p scheduled-stop-363352 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-363352       │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ stop    │ -p scheduled-stop-363352 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-363352       │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ stop    │ -p scheduled-stop-363352 --cancel-scheduled                                                                      │ scheduled-stop-363352       │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ stop    │ -p scheduled-stop-363352 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-363352       │ jenkins │ v1.37.0 │ 27 Dec 25 20:30 UTC │                     │
	│ stop    │ -p scheduled-stop-363352 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-363352       │ jenkins │ v1.37.0 │ 27 Dec 25 20:30 UTC │                     │
	│ stop    │ -p scheduled-stop-363352 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-363352       │ jenkins │ v1.37.0 │ 27 Dec 25 20:30 UTC │ 27 Dec 25 20:30 UTC │
	│ delete  │ -p scheduled-stop-363352                                                                                         │ scheduled-stop-363352       │ jenkins │ v1.37.0 │ 27 Dec 25 20:31 UTC │ 27 Dec 25 20:31 UTC │
	│ start   │ -p insufficient-storage-170209 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio │ insufficient-storage-170209 │ jenkins │ v1.37.0 │ 27 Dec 25 20:31 UTC │                     │
	│ delete  │ -p insufficient-storage-170209                                                                                   │ insufficient-storage-170209 │ jenkins │ v1.37.0 │ 27 Dec 25 20:31 UTC │ 27 Dec 25 20:31 UTC │
	│ start   │ -p pause-063268 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio        │ pause-063268                │ jenkins │ v1.37.0 │ 27 Dec 25 20:31 UTC │ 27 Dec 25 20:32 UTC │
	│ start   │ -p missing-upgrade-655901 --memory=3072 --driver=docker  --container-runtime=crio                                │ missing-upgrade-655901      │ jenkins │ v1.35.0 │ 27 Dec 25 20:31 UTC │ 27 Dec 25 20:32 UTC │
	│ start   │ -p pause-063268 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ pause-063268                │ jenkins │ v1.37.0 │ 27 Dec 25 20:32 UTC │ 27 Dec 25 20:32 UTC │
	│ start   │ -p missing-upgrade-655901 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio         │ missing-upgrade-655901      │ jenkins │ v1.37.0 │ 27 Dec 25 20:32 UTC │                     │
	│ pause   │ -p pause-063268 --alsologtostderr -v=5                                                                           │ pause-063268                │ jenkins │ v1.37.0 │ 27 Dec 25 20:32 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:32:24
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:32:24.429107  416334 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:32:24.429327  416334 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:32:24.429355  416334 out.go:374] Setting ErrFile to fd 2...
	I1227 20:32:24.429375  416334 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:32:24.430428  416334 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:32:24.431350  416334 out.go:368] Setting JSON to false
	I1227 20:32:24.432257  416334 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":8097,"bootTime":1766859448,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:32:24.432351  416334 start.go:143] virtualization:  
	I1227 20:32:24.437438  416334 out.go:179] * [missing-upgrade-655901] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:32:24.443167  416334 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:32:24.443251  416334 notify.go:221] Checking for updates...
	I1227 20:32:24.446847  416334 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:32:24.452173  416334 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:32:24.455460  416334 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:32:24.458310  416334 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:32:24.461284  416334 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:32:24.465639  416334 config.go:182] Loaded profile config "missing-upgrade-655901": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1227 20:32:24.470371  416334 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I1227 20:32:24.473170  416334 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:32:24.508351  416334 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:32:24.508451  416334 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:32:24.595366  416334 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 20:32:24.582872576 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:32:24.595460  416334 docker.go:319] overlay module found
	I1227 20:32:24.598643  416334 out.go:179] * Using the docker driver based on existing profile
	I1227 20:32:24.601773  416334 start.go:309] selected driver: docker
	I1227 20:32:24.601798  416334 start.go:928] validating driver "docker" against &{Name:missing-upgrade-655901 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-655901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:32:24.601893  416334 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:32:24.602543  416334 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:32:24.692946  416334 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 20:32:24.681993852 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:32:24.693258  416334 cni.go:84] Creating CNI manager for ""
	I1227 20:32:24.693314  416334 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:32:24.693348  416334 start.go:353] cluster config:
	{Name:missing-upgrade-655901 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-655901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:32:24.696784  416334 out.go:179] * Starting "missing-upgrade-655901" primary control-plane node in "missing-upgrade-655901" cluster
	I1227 20:32:24.699456  416334 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:32:24.702535  416334 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:32:24.705269  416334 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1227 20:32:24.705309  416334 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:32:24.705318  416334 cache.go:65] Caching tarball of preloaded images
	I1227 20:32:24.705392  416334 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:32:24.705402  416334 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1227 20:32:24.705583  416334 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I1227 20:32:24.705850  416334 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/missing-upgrade-655901/config.json ...
	I1227 20:32:24.736082  416334 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I1227 20:32:24.736102  416334 cache.go:158] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I1227 20:32:24.736116  416334 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:32:24.736143  416334 start.go:360] acquireMachinesLock for missing-upgrade-655901: {Name:mk7162be709e759f3e9b267b63aa1c582dfe1fe9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:32:24.736194  416334 start.go:364] duration metric: took 31.277µs to acquireMachinesLock for "missing-upgrade-655901"
	I1227 20:32:24.736212  416334 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:32:24.736217  416334 fix.go:54] fixHost starting: 
	I1227 20:32:24.736481  416334 cli_runner.go:164] Run: docker container inspect missing-upgrade-655901 --format={{.State.Status}}
	W1227 20:32:24.761177  416334 cli_runner.go:211] docker container inspect missing-upgrade-655901 --format={{.State.Status}} returned with exit code 1
	I1227 20:32:24.761241  416334 fix.go:112] recreateIfNeeded on missing-upgrade-655901: state= err=unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:24.761255  416334 fix.go:117] machineExists: false. err=machine does not exist
	I1227 20:32:24.764663  416334 out.go:179] * docker "missing-upgrade-655901" container is missing, will recreate.
	I1227 20:32:23.767001  415328 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:32:23.767021  415328 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:32:23.767076  415328 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:32:23.811582  415328 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:32:23.811662  415328 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:32:23.811685  415328 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 20:32:23.811808  415328 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-063268 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:pause-063268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:32:23.811939  415328 ssh_runner.go:195] Run: crio config
	I1227 20:32:23.917048  415328 cni.go:84] Creating CNI manager for ""
	I1227 20:32:23.917121  415328 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:32:23.917155  415328 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:32:23.917207  415328 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-063268 NodeName:pause-063268 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:32:23.917398  415328 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-063268"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:32:23.917520  415328 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:32:23.926745  415328 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:32:23.926903  415328 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:32:23.943128  415328 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1227 20:32:23.960616  415328 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:32:23.979001  415328 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1227 20:32:23.999508  415328 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:32:24.004357  415328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:32:24.216611  415328 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:32:24.251960  415328 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/pause-063268 for IP: 192.168.76.2
	I1227 20:32:24.251984  415328 certs.go:195] generating shared ca certs ...
	I1227 20:32:24.252000  415328 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:32:24.252166  415328 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:32:24.252214  415328 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:32:24.252224  415328 certs.go:257] generating profile certs ...
	I1227 20:32:24.252315  415328 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/pause-063268/client.key
	I1227 20:32:24.252392  415328 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/pause-063268/apiserver.key.8a7ef7a8
	I1227 20:32:24.252439  415328 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/pause-063268/proxy-client.key
	I1227 20:32:24.252550  415328 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:32:24.252587  415328 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:32:24.252599  415328 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:32:24.252627  415328 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:32:24.252661  415328 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:32:24.252688  415328 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:32:24.252739  415328 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:32:24.253345  415328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:32:24.303965  415328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:32:24.325083  415328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:32:24.347104  415328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:32:24.411826  415328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/pause-063268/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1227 20:32:24.429805  415328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/pause-063268/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:32:24.448906  415328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/pause-063268/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:32:24.468089  415328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/pause-063268/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:32:24.493604  415328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:32:24.513858  415328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:32:24.537991  415328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:32:24.557344  415328 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:32:24.569908  415328 ssh_runner.go:195] Run: openssl version
	I1227 20:32:24.582793  415328 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:32:24.593629  415328 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:32:24.603282  415328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:32:24.609610  415328 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:32:24.609708  415328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:32:24.656002  415328 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:32:24.663562  415328 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:32:24.670798  415328 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:32:24.679560  415328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:32:24.686269  415328 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:32:24.686422  415328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:32:24.730677  415328 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:32:24.740273  415328 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:32:24.747820  415328 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:32:24.755688  415328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:32:24.760033  415328 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:32:24.760115  415328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:32:24.802544  415328 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:32:24.811373  415328 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:32:24.815648  415328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:32:24.862385  415328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:32:24.903603  415328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:32:24.944504  415328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:32:24.985152  415328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:32:25.025996  415328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
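(Editor's note) Each `openssl x509 -checkend 86400` run above asks whether the certificate expires within the next 24 hours; exit status 0 means it is still valid for at least that long. A rough, hypothetical Go equivalent of that check for a single PEM file, assuming the same 86400-second window (the path is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Path taken from the log above; any PEM-encoded certificate works.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of -checkend 86400: fail if the cert will have expired 24h from now.
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}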
	I1227 20:32:25.067803  415328 kubeadm.go:401] StartCluster: {Name:pause-063268 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-063268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:32:25.068008  415328 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:32:25.068107  415328 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:32:25.097682  415328 cri.go:96] found id: "416558ce12be971f6a4b6e7ef366b6cd5749f709d73875a41b18c0ee365732fb"
	I1227 20:32:25.097702  415328 cri.go:96] found id: "b6029d6a282d2e2b9baf6745700225b920389457d9eaed47d3219d6a1698087b"
	I1227 20:32:25.097707  415328 cri.go:96] found id: "dc98608038347ea5a66daf1ae3446ca17649266730a5d13c8aa1465c8a6f3124"
	I1227 20:32:25.097710  415328 cri.go:96] found id: "9777b2b15db10899b07ba1594d186322030830e3fdb8ddbd2a0f20737d3d28c8"
	I1227 20:32:25.097713  415328 cri.go:96] found id: "d3397c067b48bfc38eea1e3595441ae62853459c7ca60afd179fd9d4a21ac34d"
	I1227 20:32:25.097716  415328 cri.go:96] found id: "9e51e3ee441e237964c680e42a957dc502b056f13d851926ff327daf2de4f7d4"
	I1227 20:32:25.097720  415328 cri.go:96] found id: "768d0620e2839f7eab59ee37f96feaa3ecf0b3a65150b795226874a68029be62"
	I1227 20:32:25.097723  415328 cri.go:96] found id: ""
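(Editor's note) The IDs above come from running `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` on the node and splitting its output line by line; the trailing empty ID presumably corresponds to the final empty line of that output. A simplified, hypothetical sketch of the same pattern using os/exec — this is not minikube's cri.go, which runs the command over SSH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Requires crictl on PATH and a reachable CRI endpoint; intended to run on the node itself.
	out, err := exec.Command("sudo", "crictl", "--timeout=10s", "ps", "-a",
		"--quiet", "--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if id != "" {
			fmt.Println("found id:", id)
		}
	}
}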
	I1227 20:32:25.097776  415328 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:32:25.117146  415328 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:32:25Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:32:25.117221  415328 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:32:25.125258  415328 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:32:25.125278  415328 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:32:25.125331  415328 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:32:25.133508  415328 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:32:25.134220  415328 kubeconfig.go:125] found "pause-063268" server: "https://192.168.76.2:8443"
	I1227 20:32:25.135059  415328 kapi.go:59] client config for pause-063268: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/pause-063268/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/pause-063268/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 20:32:25.135587  415328 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1227 20:32:25.135605  415328 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1227 20:32:25.135612  415328 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1227 20:32:25.135616  415328 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1227 20:32:25.135621  415328 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1227 20:32:25.135632  415328 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1227 20:32:25.135926  415328 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:32:25.144713  415328 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1227 20:32:25.144745  415328 kubeadm.go:602] duration metric: took 19.460214ms to restartPrimaryControlPlane
	I1227 20:32:25.144755  415328 kubeadm.go:403] duration metric: took 76.975943ms to StartCluster
	I1227 20:32:25.144771  415328 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:32:25.144847  415328 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:32:25.145765  415328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:32:25.146002  415328 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:32:25.146263  415328 config.go:182] Loaded profile config "pause-063268": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:32:25.146319  415328 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:32:25.149597  415328 out.go:179] * Enabled addons: 
	I1227 20:32:25.149656  415328 out.go:179] * Verifying Kubernetes components...
	I1227 20:32:25.152484  415328 addons.go:530] duration metric: took 6.157468ms for enable addons: enabled=[]
	I1227 20:32:25.152535  415328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:32:25.295072  415328 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:32:25.307821  415328 node_ready.go:35] waiting up to 6m0s for node "pause-063268" to be "Ready" ...
	I1227 20:32:24.767472  416334 delete.go:124] DEMOLISHING missing-upgrade-655901 ...
	I1227 20:32:24.767574  416334 cli_runner.go:164] Run: docker container inspect missing-upgrade-655901 --format={{.State.Status}}
	W1227 20:32:24.786963  416334 cli_runner.go:211] docker container inspect missing-upgrade-655901 --format={{.State.Status}} returned with exit code 1
	W1227 20:32:24.787017  416334 stop.go:83] unable to get state: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:24.787035  416334 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:24.787486  416334 cli_runner.go:164] Run: docker container inspect missing-upgrade-655901 --format={{.State.Status}}
	W1227 20:32:24.806871  416334 cli_runner.go:211] docker container inspect missing-upgrade-655901 --format={{.State.Status}} returned with exit code 1
	I1227 20:32:24.806944  416334 delete.go:82] Unable to get host status for missing-upgrade-655901, assuming it has already been deleted: state: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:24.807001  416334 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-655901
	W1227 20:32:24.827609  416334 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-655901 returned with exit code 1
	I1227 20:32:24.827656  416334 kic.go:371] could not find the container missing-upgrade-655901 to remove it. will try anyways
	I1227 20:32:24.827719  416334 cli_runner.go:164] Run: docker container inspect missing-upgrade-655901 --format={{.State.Status}}
	W1227 20:32:24.844672  416334 cli_runner.go:211] docker container inspect missing-upgrade-655901 --format={{.State.Status}} returned with exit code 1
	W1227 20:32:24.844751  416334 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:24.844822  416334 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-655901 /bin/bash -c "sudo init 0"
	W1227 20:32:24.862348  416334 cli_runner.go:211] docker exec --privileged -t missing-upgrade-655901 /bin/bash -c "sudo init 0" returned with exit code 1
	I1227 20:32:24.862406  416334 oci.go:659] error shutdown missing-upgrade-655901: docker exec --privileged -t missing-upgrade-655901 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:25.862667  416334 cli_runner.go:164] Run: docker container inspect missing-upgrade-655901 --format={{.State.Status}}
	W1227 20:32:25.879294  416334 cli_runner.go:211] docker container inspect missing-upgrade-655901 --format={{.State.Status}} returned with exit code 1
	I1227 20:32:25.879356  416334 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:25.879385  416334 oci.go:673] temporary error: container missing-upgrade-655901 status is  but expect it to be exited
	I1227 20:32:25.879430  416334 retry.go:84] will retry after 500ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:26.339748  416334 cli_runner.go:164] Run: docker container inspect missing-upgrade-655901 --format={{.State.Status}}
	W1227 20:32:26.354064  416334 cli_runner.go:211] docker container inspect missing-upgrade-655901 --format={{.State.Status}} returned with exit code 1
	I1227 20:32:26.354125  416334 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:26.354143  416334 oci.go:673] temporary error: container missing-upgrade-655901 status is  but expect it to be exited
	I1227 20:32:27.285339  416334 cli_runner.go:164] Run: docker container inspect missing-upgrade-655901 --format={{.State.Status}}
	W1227 20:32:27.306901  416334 cli_runner.go:211] docker container inspect missing-upgrade-655901 --format={{.State.Status}} returned with exit code 1
	I1227 20:32:27.306959  416334 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:27.306979  416334 oci.go:673] temporary error: container missing-upgrade-655901 status is  but expect it to be exited
	I1227 20:32:28.387367  416334 cli_runner.go:164] Run: docker container inspect missing-upgrade-655901 --format={{.State.Status}}
	W1227 20:32:28.405640  416334 cli_runner.go:211] docker container inspect missing-upgrade-655901 --format={{.State.Status}} returned with exit code 1
	I1227 20:32:28.405697  416334 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:28.405706  416334 oci.go:673] temporary error: container missing-upgrade-655901 status is  but expect it to be exited
	I1227 20:32:30.257734  415328 node_ready.go:49] node "pause-063268" is "Ready"
	I1227 20:32:30.257760  415328 node_ready.go:38] duration metric: took 4.949901637s for node "pause-063268" to be "Ready" ...
	I1227 20:32:30.257774  415328 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:32:30.257835  415328 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:32:30.275525  415328 api_server.go:72] duration metric: took 5.129493566s to wait for apiserver process to appear ...
	I1227 20:32:30.275549  415328 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:32:30.275569  415328 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:32:30.306377  415328 api_server.go:325] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1227 20:32:30.306448  415328 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1227 20:32:30.776649  415328 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:32:30.785864  415328 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:32:30.785934  415328 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:32:31.276199  415328 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:32:31.285250  415328 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:32:31.285289  415328 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:32:31.775859  415328 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:32:31.783951  415328 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 20:32:31.784963  415328 api_server.go:141] control plane version: v1.35.0
	I1227 20:32:31.784990  415328 api_server.go:131] duration metric: took 1.509434005s to wait for apiserver health ...
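(Editor's note) The sequence above is the usual pattern for waiting on a restarted apiserver: /healthz first returns 403 for the anonymous probe, then 500 while post-start hooks (rbac/bootstrap-roles, the priority-class bootstrap) are still failing, and finally 200 "ok". A bare-bones, hypothetical polling loop in Go against the same endpoint; certificate verification is skipped only to keep the sketch short, whereas minikube's own check uses the cluster CA and client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above.
	const url = "https://192.168.76.2:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify keeps the sketch short; real callers should trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 20; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		} else {
			fmt.Println("healthz error:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}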
	I1227 20:32:31.785000  415328 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:32:31.788349  415328 system_pods.go:59] 7 kube-system pods found
	I1227 20:32:31.788386  415328 system_pods.go:61] "coredns-7d764666f9-c22b6" [6b763ed8-dbdc-44c4-a42a-5cd6236da10f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:32:31.788399  415328 system_pods.go:61] "etcd-pause-063268" [fad2b2af-e6b8-4b24-a260-9ced85b4bd27] Running
	I1227 20:32:31.788405  415328 system_pods.go:61] "kindnet-g5stk" [4e9e88b5-4c0a-4667-b384-b7d7d0a91df3] Running
	I1227 20:32:31.788410  415328 system_pods.go:61] "kube-apiserver-pause-063268" [1503e82e-99ff-429d-a01c-94fefae9f4da] Running
	I1227 20:32:31.788417  415328 system_pods.go:61] "kube-controller-manager-pause-063268" [2165ed05-52a6-42da-bb0c-4625fd414ee9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:32:31.788427  415328 system_pods.go:61] "kube-proxy-hkrgp" [d611fc25-9029-493d-9149-7d9ba7551fc1] Running
	I1227 20:32:31.788432  415328 system_pods.go:61] "kube-scheduler-pause-063268" [8d49927b-d593-40e4-9cac-7699fd215da1] Running
	I1227 20:32:31.788440  415328 system_pods.go:74] duration metric: took 3.434534ms to wait for pod list to return data ...
	I1227 20:32:31.788448  415328 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:32:31.791131  415328 default_sa.go:45] found service account: "default"
	I1227 20:32:31.791155  415328 default_sa.go:55] duration metric: took 2.692208ms for default service account to be created ...
	I1227 20:32:31.791166  415328 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:32:31.793708  415328 system_pods.go:86] 7 kube-system pods found
	I1227 20:32:31.793740  415328 system_pods.go:89] "coredns-7d764666f9-c22b6" [6b763ed8-dbdc-44c4-a42a-5cd6236da10f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:32:31.793747  415328 system_pods.go:89] "etcd-pause-063268" [fad2b2af-e6b8-4b24-a260-9ced85b4bd27] Running
	I1227 20:32:31.793753  415328 system_pods.go:89] "kindnet-g5stk" [4e9e88b5-4c0a-4667-b384-b7d7d0a91df3] Running
	I1227 20:32:31.793758  415328 system_pods.go:89] "kube-apiserver-pause-063268" [1503e82e-99ff-429d-a01c-94fefae9f4da] Running
	I1227 20:32:31.793765  415328 system_pods.go:89] "kube-controller-manager-pause-063268" [2165ed05-52a6-42da-bb0c-4625fd414ee9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:32:31.793771  415328 system_pods.go:89] "kube-proxy-hkrgp" [d611fc25-9029-493d-9149-7d9ba7551fc1] Running
	I1227 20:32:31.793779  415328 system_pods.go:89] "kube-scheduler-pause-063268" [8d49927b-d593-40e4-9cac-7699fd215da1] Running
	I1227 20:32:31.793787  415328 system_pods.go:126] duration metric: took 2.614424ms to wait for k8s-apps to be running ...
	I1227 20:32:31.793794  415328 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:32:31.793858  415328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:32:31.806632  415328 system_svc.go:56] duration metric: took 12.827919ms WaitForService to wait for kubelet
	I1227 20:32:31.806661  415328 kubeadm.go:587] duration metric: took 6.660634415s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:32:31.806680  415328 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:32:31.809627  415328 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:32:31.809656  415328 node_conditions.go:123] node cpu capacity is 2
	I1227 20:32:31.809669  415328 node_conditions.go:105] duration metric: took 2.966622ms to run NodePressure ...
	I1227 20:32:31.809682  415328 start.go:242] waiting for startup goroutines ...
	I1227 20:32:31.809690  415328 start.go:247] waiting for cluster config update ...
	I1227 20:32:31.809698  415328 start.go:256] writing updated cluster config ...
	I1227 20:32:31.809990  415328 ssh_runner.go:195] Run: rm -f paused
	I1227 20:32:31.813564  415328 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:32:31.814189  415328 kapi.go:59] client config for pause-063268: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/pause-063268/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/pause-063268/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 20:32:31.816886  415328 pod_ready.go:83] waiting for pod "coredns-7d764666f9-c22b6" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:32:30.935179  416334 cli_runner.go:164] Run: docker container inspect missing-upgrade-655901 --format={{.State.Status}}
	W1227 20:32:30.950849  416334 cli_runner.go:211] docker container inspect missing-upgrade-655901 --format={{.State.Status}} returned with exit code 1
	I1227 20:32:30.950909  416334 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:30.950919  416334 oci.go:673] temporary error: container missing-upgrade-655901 status is  but expect it to be exited
	W1227 20:32:33.822838  415328 pod_ready.go:104] pod "coredns-7d764666f9-c22b6" is not "Ready", error: <nil>
	W1227 20:32:35.822878  415328 pod_ready.go:104] pod "coredns-7d764666f9-c22b6" is not "Ready", error: <nil>
	W1227 20:32:38.321926  415328 pod_ready.go:104] pod "coredns-7d764666f9-c22b6" is not "Ready", error: <nil>
	I1227 20:32:34.585153  416334 cli_runner.go:164] Run: docker container inspect missing-upgrade-655901 --format={{.State.Status}}
	W1227 20:32:34.603255  416334 cli_runner.go:211] docker container inspect missing-upgrade-655901 --format={{.State.Status}} returned with exit code 1
	I1227 20:32:34.603320  416334 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:34.603355  416334 oci.go:673] temporary error: container missing-upgrade-655901 status is  but expect it to be exited
	I1227 20:32:34.603402  416334 retry.go:84] will retry after 3.4s: couldn't verify container is exited. %v: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:37.975207  416334 cli_runner.go:164] Run: docker container inspect missing-upgrade-655901 --format={{.State.Status}}
	W1227 20:32:37.990200  416334 cli_runner.go:211] docker container inspect missing-upgrade-655901 --format={{.State.Status}} returned with exit code 1
	I1227 20:32:37.990265  416334 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:37.990281  416334 oci.go:673] temporary error: container missing-upgrade-655901 status is  but expect it to be exited
	I1227 20:32:37.990314  416334 retry.go:84] will retry after 5.5s: couldn't verify container is exited. %v: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:39.322469  415328 pod_ready.go:94] pod "coredns-7d764666f9-c22b6" is "Ready"
	I1227 20:32:39.322498  415328 pod_ready.go:86] duration metric: took 7.5055902s for pod "coredns-7d764666f9-c22b6" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:32:39.325053  415328 pod_ready.go:83] waiting for pod "etcd-pause-063268" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:32:40.331029  415328 pod_ready.go:94] pod "etcd-pause-063268" is "Ready"
	I1227 20:32:40.331062  415328 pod_ready.go:86] duration metric: took 1.005981297s for pod "etcd-pause-063268" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:32:40.333633  415328 pod_ready.go:83] waiting for pod "kube-apiserver-pause-063268" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:32:40.338377  415328 pod_ready.go:94] pod "kube-apiserver-pause-063268" is "Ready"
	I1227 20:32:40.338405  415328 pod_ready.go:86] duration metric: took 4.743551ms for pod "kube-apiserver-pause-063268" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:32:40.340578  415328 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-063268" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:32:41.845812  415328 pod_ready.go:94] pod "kube-controller-manager-pause-063268" is "Ready"
	I1227 20:32:41.845837  415328 pod_ready.go:86] duration metric: took 1.505230684s for pod "kube-controller-manager-pause-063268" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:32:41.848389  415328 pod_ready.go:83] waiting for pod "kube-proxy-hkrgp" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:32:42.121661  415328 pod_ready.go:94] pod "kube-proxy-hkrgp" is "Ready"
	I1227 20:32:42.121691  415328 pod_ready.go:86] duration metric: took 273.27958ms for pod "kube-proxy-hkrgp" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:32:42.321518  415328 pod_ready.go:83] waiting for pod "kube-scheduler-pause-063268" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:32:42.720454  415328 pod_ready.go:94] pod "kube-scheduler-pause-063268" is "Ready"
	I1227 20:32:42.720487  415328 pod_ready.go:86] duration metric: took 398.935671ms for pod "kube-scheduler-pause-063268" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:32:42.720500  415328 pod_ready.go:40] duration metric: took 10.906898856s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
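(Editor's note) The pod_ready waits above poll each control-plane pod until its Ready condition is True; coredns takes about 7.5s because its container had just been restarted. A hypothetical client-go snippet performing the same check for one pod, using the kubeconfig path from the log — this is not the test harness's own pod_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path and pod name taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22332-272475/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-scheduler-pause-063268", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}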
	I1227 20:32:42.774516  415328 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 20:32:42.777647  415328 out.go:203] 
	W1227 20:32:42.780539  415328 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 20:32:42.783306  415328 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 20:32:42.786051  415328 out.go:179] * Done! kubectl is now configured to use "pause-063268" cluster and "default" namespace by default
	I1227 20:32:43.539322  416334 cli_runner.go:164] Run: docker container inspect missing-upgrade-655901 --format={{.State.Status}}
	W1227 20:32:43.555055  416334 cli_runner.go:211] docker container inspect missing-upgrade-655901 --format={{.State.Status}} returned with exit code 1
	I1227 20:32:43.555118  416334 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:43.555140  416334 oci.go:673] temporary error: container missing-upgrade-655901 status is  but expect it to be exited
	I1227 20:32:43.555174  416334 oci.go:88] couldn't shut down missing-upgrade-655901 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	 
	I1227 20:32:43.555235  416334 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-655901
	I1227 20:32:43.571009  416334 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-655901
	W1227 20:32:43.587635  416334 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-655901 returned with exit code 1
	I1227 20:32:43.587724  416334 cli_runner.go:164] Run: docker network inspect missing-upgrade-655901 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:32:43.603735  416334 cli_runner.go:164] Run: docker network rm missing-upgrade-655901
	I1227 20:32:43.708556  416334 fix.go:124] Sleeping 1 second for extra luck!
	I1227 20:32:44.709065  416334 start.go:125] createHost starting for "" (driver="docker")
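(Editor's note) The interleaved 416334 process above is tearing down the stale missing-upgrade-655901 profile: every `docker container inspect` exits 1 because the container no longer exists, so the shutdown can never be verified and the code retries with growing delays (500ms, 3.4s, 5.5s) before falling through to `docker rm -f -v` and `docker network rm`. A compact, hypothetical version of that verify-with-backoff loop; the delays here are illustrative, not minikube's exact schedule:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// containerExited reports whether `docker container inspect` says the container has exited.
func containerExited(name string) (bool, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		// Covers the "No such container" case seen in the log above.
		return false, err
	}
	return string(out) == "exited\n", nil
}

func main() {
	const name = "missing-upgrade-655901"
	delays := []time.Duration{500 * time.Millisecond, time.Second, 3 * time.Second, 5 * time.Second}
	for _, d := range delays {
		if ok, err := containerExited(name); err == nil && ok {
			fmt.Println("container is exited")
			return
		} else if err != nil {
			fmt.Println("could not verify shutdown:", err)
		}
		time.Sleep(d)
	}
	// Shutdown could not be verified; force-remove, as the log does with `docker rm -f -v`.
	fmt.Println("forcing removal of", name)
	_ = exec.Command("docker", "rm", "-f", "-v", name).Run()
}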
	
	
	==> CRI-O <==
	Dec 27 20:32:26 pause-063268 crio[2086]: time="2025-12-27T20:32:26.984460987Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:32:27 pause-063268 crio[2086]: time="2025-12-27T20:32:27.02477657Z" level=info msg="Created container 2d5187c09e0d2282d9b217df137e91d2bde4c8df84c181dfc67dd7d051c343d1: kube-system/kube-controller-manager-pause-063268/kube-controller-manager" id=1d8cb867-5470-42df-8b3a-e953a28e5f3a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:32:27 pause-063268 crio[2086]: time="2025-12-27T20:32:27.025855274Z" level=info msg="Starting container: 2d5187c09e0d2282d9b217df137e91d2bde4c8df84c181dfc67dd7d051c343d1" id=9078e8ca-f7b0-4963-a64d-6555150a49f0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:32:27 pause-063268 crio[2086]: time="2025-12-27T20:32:27.029171943Z" level=info msg="Created container 46206dca1d54589a2dc7e056f6533461fa94f3ac29b80ee9ef2c43ecd0f219da: kube-system/kube-apiserver-pause-063268/kube-apiserver" id=53292ce7-4163-4beb-ab5f-b7f13fb408fe name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:32:27 pause-063268 crio[2086]: time="2025-12-27T20:32:27.030071682Z" level=info msg="Starting container: 46206dca1d54589a2dc7e056f6533461fa94f3ac29b80ee9ef2c43ecd0f219da" id=96a84d6a-6d1c-4a1a-adf7-522eec1ef57c name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:32:27 pause-063268 crio[2086]: time="2025-12-27T20:32:27.033021122Z" level=info msg="Started container" PID=2420 containerID=2d5187c09e0d2282d9b217df137e91d2bde4c8df84c181dfc67dd7d051c343d1 description=kube-system/kube-controller-manager-pause-063268/kube-controller-manager id=9078e8ca-f7b0-4963-a64d-6555150a49f0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=81536374a07789b57d3e33b9b5d43c8793b24b7ca122b858b0f267dfc8ccfa53
	Dec 27 20:32:27 pause-063268 crio[2086]: time="2025-12-27T20:32:27.042775434Z" level=info msg="Created container 3dcb1acad15eb773b8e4d1bf48eff17bd636db39f14d8b4c2114369015e50c99: kube-system/etcd-pause-063268/etcd" id=a31e55b9-8649-43a5-81a4-a286dabc0ff4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:32:27 pause-063268 crio[2086]: time="2025-12-27T20:32:27.044217618Z" level=info msg="Started container" PID=2422 containerID=46206dca1d54589a2dc7e056f6533461fa94f3ac29b80ee9ef2c43ecd0f219da description=kube-system/kube-apiserver-pause-063268/kube-apiserver id=96a84d6a-6d1c-4a1a-adf7-522eec1ef57c name=/runtime.v1.RuntimeService/StartContainer sandboxID=470773e6287048246b0a0683cf448359686d58cffb1fee0e0e6cab89195cdd3f
	Dec 27 20:32:27 pause-063268 crio[2086]: time="2025-12-27T20:32:27.045100158Z" level=info msg="Starting container: 3dcb1acad15eb773b8e4d1bf48eff17bd636db39f14d8b4c2114369015e50c99" id=521f3aef-065c-42c6-89b5-d08448b9f5d8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:32:27 pause-063268 crio[2086]: time="2025-12-27T20:32:27.062931609Z" level=info msg="Started container" PID=2426 containerID=3dcb1acad15eb773b8e4d1bf48eff17bd636db39f14d8b4c2114369015e50c99 description=kube-system/etcd-pause-063268/etcd id=521f3aef-065c-42c6-89b5-d08448b9f5d8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0caca91153ab937a97f873ecb14955ebbf04b37e5acc5dff24efebc8eddb6b8e
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.180538218Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.184771619Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.184804775Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.184827691Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.187860125Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.18789982Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.187923065Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.191004531Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.191035833Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.191057092Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.194691146Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.194722981Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.194745175Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.197904638Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.197951923Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	3dcb1acad15eb       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     19 seconds ago       Running             etcd                      1                   0caca91153ab9       etcd-pause-063268                      kube-system
	46206dca1d545       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     19 seconds ago       Running             kube-apiserver            1                   470773e628704       kube-apiserver-pause-063268            kube-system
	2d5187c09e0d2       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     19 seconds ago       Running             kube-controller-manager   1                   81536374a0778       kube-controller-manager-pause-063268   kube-system
	3abedd3ca1b67       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     19 seconds ago       Running             kube-scheduler            1                   35b7de4fd04a2       kube-scheduler-pause-063268            kube-system
	fa108279b1eb9       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     19 seconds ago       Running             coredns                   1                   7f0ade60dd2c0       coredns-7d764666f9-c22b6               kube-system
	8cac825e5dc8b       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     19 seconds ago       Running             kube-proxy                1                   02e1eb74c1bd4       kube-proxy-hkrgp                       kube-system
	7d35bd38f16e2       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                     19 seconds ago       Running             kindnet-cni               1                   0edcd6415ee05       kindnet-g5stk                          kube-system
	416558ce12be9       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     35 seconds ago       Exited              coredns                   0                   7f0ade60dd2c0       coredns-7d764666f9-c22b6               kube-system
	b6029d6a282d2       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3   46 seconds ago       Exited              kindnet-cni               0                   0edcd6415ee05       kindnet-g5stk                          kube-system
	dc98608038347       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     49 seconds ago       Exited              kube-proxy                0                   02e1eb74c1bd4       kube-proxy-hkrgp                       kube-system
	9777b2b15db10       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     About a minute ago   Exited              kube-scheduler            0                   35b7de4fd04a2       kube-scheduler-pause-063268            kube-system
	d3397c067b48b       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     About a minute ago   Exited              kube-controller-manager   0                   81536374a0778       kube-controller-manager-pause-063268   kube-system
	9e51e3ee441e2       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     About a minute ago   Exited              kube-apiserver            0                   470773e628704       kube-apiserver-pause-063268            kube-system
	768d0620e2839       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     About a minute ago   Exited              etcd                      0                   0caca91153ab9       etcd-pause-063268                      kube-system
	
	
	==> coredns [416558ce12be971f6a4b6e7ef366b6cd5749f709d73875a41b18c0ee365732fb] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:43128 - 25322 "HINFO IN 3771375815311659064.1043972883728246632. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021885539s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fa108279b1eb9ceb70588f2233e4945ac93e186f6bcf3c4ee7c054449cf0f75b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:47229 - 12593 "HINFO IN 4191501068697679840.7882993322500420963. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008049406s
	
	
	==> describe nodes <==
	Name:               pause-063268
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-063268
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=pause-063268
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_31_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:31:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-063268
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:32:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:32:33 +0000   Sat, 27 Dec 2025 20:31:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:32:33 +0000   Sat, 27 Dec 2025 20:31:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:32:33 +0000   Sat, 27 Dec 2025 20:31:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:32:33 +0000   Sat, 27 Dec 2025 20:32:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-063268
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                aa57afc7-b131-4907-b0d7-ed80e5a1309f
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-c22b6                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     50s
	  kube-system                 etcd-pause-063268                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         55s
	  kube-system                 kindnet-g5stk                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      50s
	  kube-system                 kube-apiserver-pause-063268             250m (12%)    0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-controller-manager-pause-063268    200m (10%)    0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-proxy-hkrgp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-scheduler-pause-063268             100m (5%)     0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  51s   node-controller  Node pause-063268 event: Registered Node pause-063268 in Controller
	  Normal  RegisteredNode  14s   node-controller  Node pause-063268 event: Registered Node pause-063268 in Controller
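	The node description above can usually be reproduced with kubectl against the same profile. This is an illustrative sketch, assuming the profile name pause-063268 shown in this log:
	
	  out/minikube-linux-arm64 -p pause-063268 kubectl -- describe node pause-063268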
	
	
	==> dmesg <==
	[Dec27 20:04] overlayfs: idmapped layers are currently not supported
	[Dec27 20:05] overlayfs: idmapped layers are currently not supported
	[Dec27 20:06] overlayfs: idmapped layers are currently not supported
	[Dec27 20:07] overlayfs: idmapped layers are currently not supported
	[  +3.687478] overlayfs: idmapped layers are currently not supported
	[Dec27 20:15] overlayfs: idmapped layers are currently not supported
	[  +3.163851] overlayfs: idmapped layers are currently not supported
	[Dec27 20:16] overlayfs: idmapped layers are currently not supported
	[ +35.129102] overlayfs: idmapped layers are currently not supported
	[Dec27 20:17] overlayfs: idmapped layers are currently not supported
	[Dec27 20:19] overlayfs: idmapped layers are currently not supported
	[ +36.244108] systemd-journald[225]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 20:22] overlayfs: idmapped layers are currently not supported
	[Dec27 20:23] overlayfs: idmapped layers are currently not supported
	[Dec27 20:24] overlayfs: idmapped layers are currently not supported
	[Dec27 20:25] overlayfs: idmapped layers are currently not supported
	[ +35.447549] overlayfs: idmapped layers are currently not supported
	[Dec27 20:26] overlayfs: idmapped layers are currently not supported
	[Dec27 20:27] overlayfs: idmapped layers are currently not supported
	[  +6.770645] overlayfs: idmapped layers are currently not supported
	[Dec27 20:28] overlayfs: idmapped layers are currently not supported
	[ +25.872751] overlayfs: idmapped layers are currently not supported
	[Dec27 20:29] overlayfs: idmapped layers are currently not supported
	[ +32.997137] overlayfs: idmapped layers are currently not supported
	[Dec27 20:31] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3dcb1acad15eb773b8e4d1bf48eff17bd636db39f14d8b4c2114369015e50c99] <==
	{"level":"info","ts":"2025-12-27T20:32:27.171291Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T20:32:27.171382Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T20:32:27.178370Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-27T20:32:27.182171Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T20:32:27.182507Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T20:32:27.181769Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T20:32:27.185804Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T20:32:27.649487Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T20:32:27.649543Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:32:27.649583Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T20:32:27.649594Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:32:27.649610Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T20:32:27.657499Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T20:32:27.657537Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:32:27.657552Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T20:32:27.657560Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T20:32:27.661620Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-063268 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:32:27.661660Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:32:27.661881Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:32:27.662782Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:32:27.664667Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:32:27.675279Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:32:27.675389Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:32:27.676191Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:32:27.683724Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> etcd [768d0620e2839f7eab59ee37f96feaa3ecf0b3a65150b795226874a68029be62] <==
	{"level":"info","ts":"2025-12-27T20:31:46.721721Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:31:46.721803Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:31:46.711212Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:31:46.721925Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T20:31:46.722020Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T20:31:46.726517Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T20:31:46.727045Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:32:15.949166Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-27T20:32:15.949335Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-063268","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-12-27T20:32:15.949421Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-27T20:32:16.113426Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-27T20:32:16.113547Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T20:32:16.113644Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"warn","ts":"2025-12-27T20:32:16.113646Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-27T20:32:16.113715Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-27T20:32:16.113754Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-27T20:32:16.113724Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-27T20:32:16.113837Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-27T20:32:16.113761Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-27T20:32:16.113928Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T20:32:16.113823Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-27T20:32:16.117144Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-12-27T20:32:16.117292Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T20:32:16.117353Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T20:32:16.117387Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-063268","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 20:32:47 up  2:15,  0 user,  load average: 2.36, 1.95, 1.80
	Linux pause-063268 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7d35bd38f16e24300670b1d7b0a1bb5c51e54ae70ff336a27ea418629779fd43] <==
	I1227 20:32:26.936541       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:32:26.937800       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 20:32:26.937944       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:32:26.937956       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:32:26.937966       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:32:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:32:27.180198       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:32:27.180286       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:32:27.180319       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:32:27.181801       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 20:32:27.229856       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1227 20:32:27.230079       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1227 20:32:27.230138       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 20:32:27.230291       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1227 20:32:30.381687       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:32:30.381732       1 metrics.go:72] Registering metrics
	I1227 20:32:30.381781       1 controller.go:711] "Syncing nftables rules"
	I1227 20:32:37.180082       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:32:37.180160       1 main.go:301] handling current node
	I1227 20:32:47.180314       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:32:47.180365       1 main.go:301] handling current node
	
	
	==> kindnet [b6029d6a282d2e2b9baf6745700225b920389457d9eaed47d3219d6a1698087b] <==
	I1227 20:32:00.640221       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:32:00.730113       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 20:32:00.730806       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:32:00.731630       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:32:00.732337       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:32:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:32:00.930138       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:32:00.930236       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:32:00.930271       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:32:00.931524       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 20:32:01.133526       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:32:01.133623       1 metrics.go:72] Registering metrics
	I1227 20:32:01.133726       1 controller.go:711] "Syncing nftables rules"
	I1227 20:32:10.845594       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:32:10.845657       1 main.go:301] handling current node
	
	
	==> kube-apiserver [46206dca1d54589a2dc7e056f6533461fa94f3ac29b80ee9ef2c43ecd0f219da] <==
	I1227 20:32:30.414284       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:32:30.426751       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:32:30.439638       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 20:32:30.440068       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 20:32:30.440351       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 20:32:30.444006       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 20:32:30.444063       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 20:32:30.444140       1 aggregator.go:187] initial CRD sync complete...
	I1227 20:32:30.444157       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 20:32:30.444165       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:32:30.444170       1 cache.go:39] Caches are synced for autoregister controller
	I1227 20:32:30.446333       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:30.446490       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:30.446567       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 20:32:30.446609       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 20:32:30.446747       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 20:32:30.447761       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 20:32:30.453769       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1227 20:32:30.463301       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 20:32:31.152391       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:32:32.241823       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:32:33.692743       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:32:33.744868       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:32:33.842885       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:32:33.945678       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [9e51e3ee441e237964c680e42a957dc502b056f13d851926ff327daf2de4f7d4] <==
	W1227 20:32:15.999083       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999108       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999134       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999165       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999187       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999211       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999238       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999263       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999288       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999314       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999341       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999365       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999395       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999423       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999451       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999478       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999505       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999533       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999561       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999589       1 logging.go:55] [core] [Channel #26 SubChannel #28]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999616       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999647       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999677       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999878       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999911       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [2d5187c09e0d2282d9b217df137e91d2bde4c8df84c181dfc67dd7d051c343d1] <==
	I1227 20:32:33.453080       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.453162       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.453231       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.453287       1 range_allocator.go:177] "Sending events to api server"
	I1227 20:32:33.453817       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.454916       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.456752       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 20:32:33.456785       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:32:33.456791       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.457043       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.457206       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.458319       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.458332       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.461598       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.461816       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.462023       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.462207       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.462566       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.464559       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:32:33.482872       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.558860       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.558885       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:32:33.558891       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:32:33.565113       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.695696       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-controller-manager [d3397c067b48bfc38eea1e3595441ae62853459c7ca60afd179fd9d4a21ac34d] <==
	I1227 20:31:56.097132       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.097261       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.097399       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.097426       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.097440       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.099445       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.100395       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.123970       1 range_allocator.go:177] "Sending events to api server"
	I1227 20:31:56.124010       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 20:31:56.124025       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:31:56.124032       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.100407       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.100420       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.100489       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.100497       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.124492       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 20:31:56.124565       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-063268"
	I1227 20:31:56.124613       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1227 20:31:56.101222       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.110828       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:31:56.178412       1 range_allocator.go:433] "Set node PodCIDR" node="pause-063268" podCIDRs=["10.244.0.0/24"]
	I1227 20:31:56.211578       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.211611       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:31:56.211617       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:31:56.224884       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [8cac825e5dc8b16adf125824bbe9d9e4396548dcbe927b2c4b11f08cff8dfa4d] <==
	I1227 20:32:27.313081       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:32:28.011140       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:32:30.431878       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:30.442094       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 20:32:30.457464       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:32:30.595234       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:32:30.595291       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:32:30.604032       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:32:30.604376       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:32:30.604551       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:32:30.606147       1 config.go:200] "Starting service config controller"
	I1227 20:32:30.606213       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:32:30.606257       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:32:30.606301       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:32:30.606339       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:32:30.606373       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:32:30.608741       1 config.go:309] "Starting node config controller"
	I1227 20:32:30.609648       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:32:30.609704       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:32:30.706928       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 20:32:30.707049       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:32:30.707128       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [dc98608038347ea5a66daf1ae3446ca17649266730a5d13c8aa1465c8a6f3124] <==
	I1227 20:31:57.784927       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:31:57.924255       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:31:58.024643       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:58.024778       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 20:31:58.024939       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:31:58.049093       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:31:58.049219       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:31:58.068636       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:31:58.069064       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:31:58.069088       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:31:58.078584       1 config.go:200] "Starting service config controller"
	I1227 20:31:58.079653       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:31:58.079702       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:31:58.079740       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:31:58.079888       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:31:58.079917       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:31:58.086826       1 config.go:309] "Starting node config controller"
	I1227 20:31:58.086924       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:31:58.086958       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:31:58.180270       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:31:58.180381       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 20:31:58.180620       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [3abedd3ca1b6781d775413bebb693eb5e728039da24f46af35b97da37204edf6] <==
	I1227 20:32:27.282947       1 serving.go:386] Generated self-signed cert in-memory
	W1227 20:32:30.230154       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 20:32:30.230256       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 20:32:30.230293       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 20:32:30.230325       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 20:32:30.461184       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 20:32:30.461271       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:32:30.470117       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 20:32:30.470219       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:32:30.471061       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 20:32:30.471133       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 20:32:30.570706       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [9777b2b15db10899b07ba1594d186322030830e3fdb8ddbd2a0f20737d3d28c8] <==
	E1227 20:31:49.547163       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 20:31:49.547247       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 20:31:49.547295       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 20:31:49.547445       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 20:31:49.547546       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 20:31:49.547637       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 20:31:49.547729       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 20:31:49.547818       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 20:31:49.547917       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 20:31:49.549498       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 20:31:49.549616       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 20:31:50.381537       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 20:31:50.505974       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 20:31:50.521405       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 20:31:50.579394       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 20:31:50.622871       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 20:31:50.673689       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 20:31:50.688828       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	I1227 20:31:52.541566       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:15.947748       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1227 20:32:15.947783       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1227 20:32:15.947802       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1227 20:32:15.947882       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 20:32:15.948105       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1227 20:32:15.948155       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 27 20:32:29 pause-063268 kubelet[1300]: E1227 20:32:29.984644    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-063268" containerName="kube-apiserver"
	Dec 27 20:32:30 pause-063268 kubelet[1300]: E1227 20:32:30.126777    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-063268" containerName="etcd"
	Dec 27 20:32:30 pause-063268 kubelet[1300]: E1227 20:32:30.211151    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-063268\" is forbidden: User \"system:node:pause-063268\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-063268' and this object" podUID="2c9686bd3c253dec5368eafec73fd79b" pod="kube-system/kube-controller-manager-pause-063268"
	Dec 27 20:32:30 pause-063268 kubelet[1300]: E1227 20:32:30.217744    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-apiserver-pause-063268\" is forbidden: User \"system:node:pause-063268\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-063268' and this object" podUID="3893ee8cfec715dd431272f8c3f7e562" pod="kube-system/kube-apiserver-pause-063268"
	Dec 27 20:32:30 pause-063268 kubelet[1300]: E1227 20:32:30.219296    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-scheduler-pause-063268\" is forbidden: User \"system:node:pause-063268\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-063268' and this object" podUID="a926e6cfc4479a30b2a8bdacb35d5de1" pod="kube-system/kube-scheduler-pause-063268"
	Dec 27 20:32:30 pause-063268 kubelet[1300]: E1227 20:32:30.220624    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"etcd-pause-063268\" is forbidden: User \"system:node:pause-063268\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-063268' and this object" podUID="86ccf717b08ffd220d7c6edcb0a83cad" pod="kube-system/etcd-pause-063268"
	Dec 27 20:32:30 pause-063268 kubelet[1300]: E1227 20:32:30.222539    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-proxy-hkrgp\" is forbidden: User \"system:node:pause-063268\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-063268' and this object" podUID="d611fc25-9029-493d-9149-7d9ba7551fc1" pod="kube-system/kube-proxy-hkrgp"
	Dec 27 20:32:30 pause-063268 kubelet[1300]: E1227 20:32:30.230538    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"kindnet-g5stk\" is forbidden: User \"system:node:pause-063268\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-063268' and this object" podUID="4e9e88b5-4c0a-4667-b384-b7d7d0a91df3" pod="kube-system/kindnet-g5stk"
	Dec 27 20:32:30 pause-063268 kubelet[1300]: E1227 20:32:30.239920    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"coredns-7d764666f9-c22b6\" is forbidden: User \"system:node:pause-063268\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-063268' and this object" podUID="6b763ed8-dbdc-44c4-a42a-5cd6236da10f" pod="kube-system/coredns-7d764666f9-c22b6"
	Dec 27 20:32:30 pause-063268 kubelet[1300]: E1227 20:32:30.257294    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-apiserver-pause-063268\" is forbidden: User \"system:node:pause-063268\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-063268' and this object" podUID="3893ee8cfec715dd431272f8c3f7e562" pod="kube-system/kube-apiserver-pause-063268"
	Dec 27 20:32:30 pause-063268 kubelet[1300]: E1227 20:32:30.308529    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-scheduler-pause-063268\" is forbidden: User \"system:node:pause-063268\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-063268' and this object" podUID="a926e6cfc4479a30b2a8bdacb35d5de1" pod="kube-system/kube-scheduler-pause-063268"
	Dec 27 20:32:30 pause-063268 kubelet[1300]: E1227 20:32:30.336864    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"etcd-pause-063268\" is forbidden: User \"system:node:pause-063268\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-063268' and this object" podUID="86ccf717b08ffd220d7c6edcb0a83cad" pod="kube-system/etcd-pause-063268"
	Dec 27 20:32:31 pause-063268 kubelet[1300]: E1227 20:32:31.419580    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-063268" containerName="kube-controller-manager"
	Dec 27 20:32:32 pause-063268 kubelet[1300]: W1227 20:32:32.814430    1300 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 27 20:32:38 pause-063268 kubelet[1300]: E1227 20:32:38.983190    1300 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-c22b6" containerName="coredns"
	Dec 27 20:32:39 pause-063268 kubelet[1300]: E1227 20:32:39.399714    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-063268" containerName="kube-scheduler"
	Dec 27 20:32:39 pause-063268 kubelet[1300]: E1227 20:32:39.828208    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-063268" containerName="kube-apiserver"
	Dec 27 20:32:40 pause-063268 kubelet[1300]: E1227 20:32:40.006691    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-063268" containerName="kube-apiserver"
	Dec 27 20:32:40 pause-063268 kubelet[1300]: E1227 20:32:40.006985    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-063268" containerName="kube-scheduler"
	Dec 27 20:32:40 pause-063268 kubelet[1300]: E1227 20:32:40.128369    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-063268" containerName="etcd"
	Dec 27 20:32:41 pause-063268 kubelet[1300]: E1227 20:32:41.009006    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-063268" containerName="etcd"
	Dec 27 20:32:41 pause-063268 kubelet[1300]: E1227 20:32:41.428226    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-063268" containerName="kube-controller-manager"
	Dec 27 20:32:43 pause-063268 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 20:32:43 pause-063268 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 20:32:43 pause-063268 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-063268 -n pause-063268
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-063268 -n pause-063268: exit status 2 (541.689066ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-063268 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
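The kubectl probe above lists any pod whose phase is not Running. A slightly wider variant of the same check, shown purely as an illustration (same context name, standard kubectl flags), would be:

	kubectl --context pause-063268 get pods -A -o wide --field-selector=status.phase!=Running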
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-063268
helpers_test.go:244: (dbg) docker inspect pause-063268:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "854628bf54d21e1d1c2b7cb5018f445262206bc2818e9cf4ce7c2edd50bbb7b8",
	        "Created": "2025-12-27T20:31:31.611853285Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 411170,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:31:32.499349396Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/854628bf54d21e1d1c2b7cb5018f445262206bc2818e9cf4ce7c2edd50bbb7b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/854628bf54d21e1d1c2b7cb5018f445262206bc2818e9cf4ce7c2edd50bbb7b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/854628bf54d21e1d1c2b7cb5018f445262206bc2818e9cf4ce7c2edd50bbb7b8/hosts",
	        "LogPath": "/var/lib/docker/containers/854628bf54d21e1d1c2b7cb5018f445262206bc2818e9cf4ce7c2edd50bbb7b8/854628bf54d21e1d1c2b7cb5018f445262206bc2818e9cf4ce7c2edd50bbb7b8-json.log",
	        "Name": "/pause-063268",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-063268:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-063268",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "854628bf54d21e1d1c2b7cb5018f445262206bc2818e9cf4ce7c2edd50bbb7b8",
	                "LowerDir": "/var/lib/docker/overlay2/473efb73f9bbf71c3f84444c41621d215a40c833c7c39aa6cac7d656220abd11-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/473efb73f9bbf71c3f84444c41621d215a40c833c7c39aa6cac7d656220abd11/merged",
	                "UpperDir": "/var/lib/docker/overlay2/473efb73f9bbf71c3f84444c41621d215a40c833c7c39aa6cac7d656220abd11/diff",
	                "WorkDir": "/var/lib/docker/overlay2/473efb73f9bbf71c3f84444c41621d215a40c833c7c39aa6cac7d656220abd11/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-063268",
	                "Source": "/var/lib/docker/volumes/pause-063268/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-063268",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-063268",
	                "name.minikube.sigs.k8s.io": "pause-063268",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "acb16375412b336ead898f1ba361363bdc3c70070303d5b39e40c45b703c4692",
	            "SandboxKey": "/var/run/docker/netns/acb16375412b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33323"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33324"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33327"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33325"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33326"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-063268": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:9f:82:1d:4e:f0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2dd6a5d24caa4467dcfd338fc6fe271087a3ce69a56c62026d391641d195417c",
	                    "EndpointID": "21781e7631eda6d21633e9647b8c53ba39fbc56696e637192d3a5a98f1f0c291",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-063268",
	                        "854628bf54d2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
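The inspect dump above is the complete container record; individual fields can be read with docker's Go-template formatting, the same mechanism used later in these logs for docker container inspect ... --format={{.State.Status}}. As an illustration only, with the container name and fields taken from the dump above:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' pause-063268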
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-063268 -n pause-063268
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-063268 -n pause-063268: exit status 2 (414.407074ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
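Both status probes in this post-mortem read one field of the minikube status struct through a Go template ({{.APIServer}} earlier, {{.Host}} just above); each prints Running while exiting 2, which the harness tolerates ("may be ok"). The same two fields can be read in a single call, shown purely as an illustration with the profile name and field names taken from the probes themselves:

	out/minikube-linux-arm64 status -p pause-063268 --format='host={{.Host}} apiserver={{.APIServer}}'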
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-063268 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-063268 logs -n 25: (2.230216541s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                       │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ list -p multinode-458368                                                                                         │ multinode-458368            │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ start   │ -p multinode-458368-m02 --driver=docker  --container-runtime=crio                                                │ multinode-458368-m02        │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │                     │
	│ start   │ -p multinode-458368-m03 --driver=docker  --container-runtime=crio                                                │ multinode-458368-m03        │ jenkins │ v1.37.0 │ 27 Dec 25 20:28 UTC │ 27 Dec 25 20:29 UTC │
	│ node    │ add -p multinode-458368                                                                                          │ multinode-458368            │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ delete  │ -p multinode-458368-m03                                                                                          │ multinode-458368-m03        │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ delete  │ -p multinode-458368                                                                                              │ multinode-458368            │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ start   │ -p scheduled-stop-363352 --memory=3072 --driver=docker  --container-runtime=crio                                 │ scheduled-stop-363352       │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ stop    │ -p scheduled-stop-363352 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-363352       │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ stop    │ -p scheduled-stop-363352 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-363352       │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ stop    │ -p scheduled-stop-363352 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-363352       │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ stop    │ -p scheduled-stop-363352 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-363352       │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ stop    │ -p scheduled-stop-363352 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-363352       │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ stop    │ -p scheduled-stop-363352 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-363352       │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │                     │
	│ stop    │ -p scheduled-stop-363352 --cancel-scheduled                                                                      │ scheduled-stop-363352       │ jenkins │ v1.37.0 │ 27 Dec 25 20:29 UTC │ 27 Dec 25 20:29 UTC │
	│ stop    │ -p scheduled-stop-363352 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-363352       │ jenkins │ v1.37.0 │ 27 Dec 25 20:30 UTC │                     │
	│ stop    │ -p scheduled-stop-363352 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-363352       │ jenkins │ v1.37.0 │ 27 Dec 25 20:30 UTC │                     │
	│ stop    │ -p scheduled-stop-363352 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-363352       │ jenkins │ v1.37.0 │ 27 Dec 25 20:30 UTC │ 27 Dec 25 20:30 UTC │
	│ delete  │ -p scheduled-stop-363352                                                                                         │ scheduled-stop-363352       │ jenkins │ v1.37.0 │ 27 Dec 25 20:31 UTC │ 27 Dec 25 20:31 UTC │
	│ start   │ -p insufficient-storage-170209 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio │ insufficient-storage-170209 │ jenkins │ v1.37.0 │ 27 Dec 25 20:31 UTC │                     │
	│ delete  │ -p insufficient-storage-170209                                                                                   │ insufficient-storage-170209 │ jenkins │ v1.37.0 │ 27 Dec 25 20:31 UTC │ 27 Dec 25 20:31 UTC │
	│ start   │ -p pause-063268 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio        │ pause-063268                │ jenkins │ v1.37.0 │ 27 Dec 25 20:31 UTC │ 27 Dec 25 20:32 UTC │
	│ start   │ -p missing-upgrade-655901 --memory=3072 --driver=docker  --container-runtime=crio                                │ missing-upgrade-655901      │ jenkins │ v1.35.0 │ 27 Dec 25 20:31 UTC │ 27 Dec 25 20:32 UTC │
	│ start   │ -p pause-063268 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ pause-063268                │ jenkins │ v1.37.0 │ 27 Dec 25 20:32 UTC │ 27 Dec 25 20:32 UTC │
	│ start   │ -p missing-upgrade-655901 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio         │ missing-upgrade-655901      │ jenkins │ v1.37.0 │ 27 Dec 25 20:32 UTC │                     │
	│ pause   │ -p pause-063268 --alsologtostderr -v=5                                                                           │ pause-063268                │ jenkins │ v1.37.0 │ 27 Dec 25 20:32 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:32:24
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:32:24.429107  416334 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:32:24.429327  416334 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:32:24.429355  416334 out.go:374] Setting ErrFile to fd 2...
	I1227 20:32:24.429375  416334 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:32:24.430428  416334 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:32:24.431350  416334 out.go:368] Setting JSON to false
	I1227 20:32:24.432257  416334 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":8097,"bootTime":1766859448,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:32:24.432351  416334 start.go:143] virtualization:  
	I1227 20:32:24.437438  416334 out.go:179] * [missing-upgrade-655901] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:32:24.443167  416334 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:32:24.443251  416334 notify.go:221] Checking for updates...
	I1227 20:32:24.446847  416334 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:32:24.452173  416334 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:32:24.455460  416334 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:32:24.458310  416334 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:32:24.461284  416334 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:32:24.465639  416334 config.go:182] Loaded profile config "missing-upgrade-655901": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1227 20:32:24.470371  416334 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I1227 20:32:24.473170  416334 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:32:24.508351  416334 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:32:24.508451  416334 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:32:24.595366  416334 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 20:32:24.582872576 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:32:24.595460  416334 docker.go:319] overlay module found
	I1227 20:32:24.598643  416334 out.go:179] * Using the docker driver based on existing profile
	I1227 20:32:24.601773  416334 start.go:309] selected driver: docker
	I1227 20:32:24.601798  416334 start.go:928] validating driver "docker" against &{Name:missing-upgrade-655901 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-655901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:32:24.601893  416334 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:32:24.602543  416334 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:32:24.692946  416334 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 20:32:24.681993852 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:32:24.693258  416334 cni.go:84] Creating CNI manager for ""
	I1227 20:32:24.693314  416334 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:32:24.693348  416334 start.go:353] cluster config:
	{Name:missing-upgrade-655901 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-655901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:32:24.696784  416334 out.go:179] * Starting "missing-upgrade-655901" primary control-plane node in "missing-upgrade-655901" cluster
	I1227 20:32:24.699456  416334 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:32:24.702535  416334 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:32:24.705269  416334 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1227 20:32:24.705309  416334 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:32:24.705318  416334 cache.go:65] Caching tarball of preloaded images
	I1227 20:32:24.705392  416334 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:32:24.705402  416334 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1227 20:32:24.705583  416334 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I1227 20:32:24.705850  416334 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/missing-upgrade-655901/config.json ...
	I1227 20:32:24.736082  416334 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I1227 20:32:24.736102  416334 cache.go:158] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I1227 20:32:24.736116  416334 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:32:24.736143  416334 start.go:360] acquireMachinesLock for missing-upgrade-655901: {Name:mk7162be709e759f3e9b267b63aa1c582dfe1fe9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:32:24.736194  416334 start.go:364] duration metric: took 31.277µs to acquireMachinesLock for "missing-upgrade-655901"
	I1227 20:32:24.736212  416334 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:32:24.736217  416334 fix.go:54] fixHost starting: 
	I1227 20:32:24.736481  416334 cli_runner.go:164] Run: docker container inspect missing-upgrade-655901 --format={{.State.Status}}
	W1227 20:32:24.761177  416334 cli_runner.go:211] docker container inspect missing-upgrade-655901 --format={{.State.Status}} returned with exit code 1
	I1227 20:32:24.761241  416334 fix.go:112] recreateIfNeeded on missing-upgrade-655901: state= err=unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:24.761255  416334 fix.go:117] machineExists: false. err=machine does not exist
	I1227 20:32:24.764663  416334 out.go:179] * docker "missing-upgrade-655901" container is missing, will recreate.
	I1227 20:32:23.767001  415328 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:32:23.767021  415328 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:32:23.767076  415328 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:32:23.811582  415328 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:32:23.811662  415328 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:32:23.811685  415328 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 20:32:23.811808  415328 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-063268 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:pause-063268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:32:23.811939  415328 ssh_runner.go:195] Run: crio config
	I1227 20:32:23.917048  415328 cni.go:84] Creating CNI manager for ""
	I1227 20:32:23.917121  415328 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:32:23.917155  415328 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:32:23.917207  415328 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-063268 NodeName:pause-063268 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:32:23.917398  415328 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-063268"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:32:23.917520  415328 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:32:23.926745  415328 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:32:23.926903  415328 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:32:23.943128  415328 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1227 20:32:23.960616  415328 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:32:23.979001  415328 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1227 20:32:23.999508  415328 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:32:24.004357  415328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:32:24.216611  415328 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:32:24.251960  415328 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/pause-063268 for IP: 192.168.76.2
	I1227 20:32:24.251984  415328 certs.go:195] generating shared ca certs ...
	I1227 20:32:24.252000  415328 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:32:24.252166  415328 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:32:24.252214  415328 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:32:24.252224  415328 certs.go:257] generating profile certs ...
	I1227 20:32:24.252315  415328 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/pause-063268/client.key
	I1227 20:32:24.252392  415328 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/pause-063268/apiserver.key.8a7ef7a8
	I1227 20:32:24.252439  415328 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/pause-063268/proxy-client.key
	I1227 20:32:24.252550  415328 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:32:24.252587  415328 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:32:24.252599  415328 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:32:24.252627  415328 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:32:24.252661  415328 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:32:24.252688  415328 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:32:24.252739  415328 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:32:24.253345  415328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:32:24.303965  415328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:32:24.325083  415328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:32:24.347104  415328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:32:24.411826  415328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/pause-063268/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1227 20:32:24.429805  415328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/pause-063268/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:32:24.448906  415328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/pause-063268/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:32:24.468089  415328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/pause-063268/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:32:24.493604  415328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:32:24.513858  415328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:32:24.537991  415328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:32:24.557344  415328 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:32:24.569908  415328 ssh_runner.go:195] Run: openssl version
	I1227 20:32:24.582793  415328 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:32:24.593629  415328 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:32:24.603282  415328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:32:24.609610  415328 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:32:24.609708  415328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:32:24.656002  415328 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:32:24.663562  415328 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:32:24.670798  415328 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:32:24.679560  415328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:32:24.686269  415328 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:32:24.686422  415328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:32:24.730677  415328 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:32:24.740273  415328 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:32:24.747820  415328 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:32:24.755688  415328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:32:24.760033  415328 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:32:24.760115  415328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:32:24.802544  415328 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:32:24.811373  415328 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:32:24.815648  415328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:32:24.862385  415328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:32:24.903603  415328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:32:24.944504  415328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:32:24.985152  415328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:32:25.025996  415328 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 20:32:25.067803  415328 kubeadm.go:401] StartCluster: {Name:pause-063268 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-063268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:32:25.068008  415328 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:32:25.068107  415328 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:32:25.097682  415328 cri.go:96] found id: "416558ce12be971f6a4b6e7ef366b6cd5749f709d73875a41b18c0ee365732fb"
	I1227 20:32:25.097702  415328 cri.go:96] found id: "b6029d6a282d2e2b9baf6745700225b920389457d9eaed47d3219d6a1698087b"
	I1227 20:32:25.097707  415328 cri.go:96] found id: "dc98608038347ea5a66daf1ae3446ca17649266730a5d13c8aa1465c8a6f3124"
	I1227 20:32:25.097710  415328 cri.go:96] found id: "9777b2b15db10899b07ba1594d186322030830e3fdb8ddbd2a0f20737d3d28c8"
	I1227 20:32:25.097713  415328 cri.go:96] found id: "d3397c067b48bfc38eea1e3595441ae62853459c7ca60afd179fd9d4a21ac34d"
	I1227 20:32:25.097716  415328 cri.go:96] found id: "9e51e3ee441e237964c680e42a957dc502b056f13d851926ff327daf2de4f7d4"
	I1227 20:32:25.097720  415328 cri.go:96] found id: "768d0620e2839f7eab59ee37f96feaa3ecf0b3a65150b795226874a68029be62"
	I1227 20:32:25.097723  415328 cri.go:96] found id: ""
	I1227 20:32:25.097776  415328 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:32:25.117146  415328 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:32:25Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:32:25.117221  415328 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:32:25.125258  415328 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:32:25.125278  415328 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:32:25.125331  415328 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:32:25.133508  415328 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:32:25.134220  415328 kubeconfig.go:125] found "pause-063268" server: "https://192.168.76.2:8443"
	I1227 20:32:25.135059  415328 kapi.go:59] client config for pause-063268: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/pause-063268/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/pause-063268/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 20:32:25.135587  415328 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1227 20:32:25.135605  415328 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1227 20:32:25.135612  415328 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1227 20:32:25.135616  415328 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1227 20:32:25.135621  415328 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1227 20:32:25.135632  415328 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1227 20:32:25.135926  415328 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:32:25.144713  415328 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1227 20:32:25.144745  415328 kubeadm.go:602] duration metric: took 19.460214ms to restartPrimaryControlPlane
	I1227 20:32:25.144755  415328 kubeadm.go:403] duration metric: took 76.975943ms to StartCluster
	I1227 20:32:25.144771  415328 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:32:25.144847  415328 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:32:25.145765  415328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:32:25.146002  415328 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:32:25.146263  415328 config.go:182] Loaded profile config "pause-063268": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:32:25.146319  415328 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:32:25.149597  415328 out.go:179] * Enabled addons: 
	I1227 20:32:25.149656  415328 out.go:179] * Verifying Kubernetes components...
	I1227 20:32:25.152484  415328 addons.go:530] duration metric: took 6.157468ms for enable addons: enabled=[]
	I1227 20:32:25.152535  415328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:32:25.295072  415328 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:32:25.307821  415328 node_ready.go:35] waiting up to 6m0s for node "pause-063268" to be "Ready" ...
	I1227 20:32:24.767472  416334 delete.go:124] DEMOLISHING missing-upgrade-655901 ...
	I1227 20:32:24.767574  416334 cli_runner.go:164] Run: docker container inspect missing-upgrade-655901 --format={{.State.Status}}
	W1227 20:32:24.786963  416334 cli_runner.go:211] docker container inspect missing-upgrade-655901 --format={{.State.Status}} returned with exit code 1
	W1227 20:32:24.787017  416334 stop.go:83] unable to get state: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:24.787035  416334 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:24.787486  416334 cli_runner.go:164] Run: docker container inspect missing-upgrade-655901 --format={{.State.Status}}
	W1227 20:32:24.806871  416334 cli_runner.go:211] docker container inspect missing-upgrade-655901 --format={{.State.Status}} returned with exit code 1
	I1227 20:32:24.806944  416334 delete.go:82] Unable to get host status for missing-upgrade-655901, assuming it has already been deleted: state: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:24.807001  416334 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-655901
	W1227 20:32:24.827609  416334 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-655901 returned with exit code 1
	I1227 20:32:24.827656  416334 kic.go:371] could not find the container missing-upgrade-655901 to remove it. will try anyways
	I1227 20:32:24.827719  416334 cli_runner.go:164] Run: docker container inspect missing-upgrade-655901 --format={{.State.Status}}
	W1227 20:32:24.844672  416334 cli_runner.go:211] docker container inspect missing-upgrade-655901 --format={{.State.Status}} returned with exit code 1
	W1227 20:32:24.844751  416334 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:24.844822  416334 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-655901 /bin/bash -c "sudo init 0"
	W1227 20:32:24.862348  416334 cli_runner.go:211] docker exec --privileged -t missing-upgrade-655901 /bin/bash -c "sudo init 0" returned with exit code 1
	I1227 20:32:24.862406  416334 oci.go:659] error shutdown missing-upgrade-655901: docker exec --privileged -t missing-upgrade-655901 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:25.862667  416334 cli_runner.go:164] Run: docker container inspect missing-upgrade-655901 --format={{.State.Status}}
	W1227 20:32:25.879294  416334 cli_runner.go:211] docker container inspect missing-upgrade-655901 --format={{.State.Status}} returned with exit code 1
	I1227 20:32:25.879356  416334 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:25.879385  416334 oci.go:673] temporary error: container missing-upgrade-655901 status is  but expect it to be exited
	I1227 20:32:25.879430  416334 retry.go:84] will retry after 500ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:26.339748  416334 cli_runner.go:164] Run: docker container inspect missing-upgrade-655901 --format={{.State.Status}}
	W1227 20:32:26.354064  416334 cli_runner.go:211] docker container inspect missing-upgrade-655901 --format={{.State.Status}} returned with exit code 1
	I1227 20:32:26.354125  416334 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:26.354143  416334 oci.go:673] temporary error: container missing-upgrade-655901 status is  but expect it to be exited
	I1227 20:32:27.285339  416334 cli_runner.go:164] Run: docker container inspect missing-upgrade-655901 --format={{.State.Status}}
	W1227 20:32:27.306901  416334 cli_runner.go:211] docker container inspect missing-upgrade-655901 --format={{.State.Status}} returned with exit code 1
	I1227 20:32:27.306959  416334 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:27.306979  416334 oci.go:673] temporary error: container missing-upgrade-655901 status is  but expect it to be exited
	I1227 20:32:28.387367  416334 cli_runner.go:164] Run: docker container inspect missing-upgrade-655901 --format={{.State.Status}}
	W1227 20:32:28.405640  416334 cli_runner.go:211] docker container inspect missing-upgrade-655901 --format={{.State.Status}} returned with exit code 1
	I1227 20:32:28.405697  416334 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:28.405706  416334 oci.go:673] temporary error: container missing-upgrade-655901 status is  but expect it to be exited
	I1227 20:32:30.257734  415328 node_ready.go:49] node "pause-063268" is "Ready"
	I1227 20:32:30.257760  415328 node_ready.go:38] duration metric: took 4.949901637s for node "pause-063268" to be "Ready" ...
	I1227 20:32:30.257774  415328 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:32:30.257835  415328 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:32:30.275525  415328 api_server.go:72] duration metric: took 5.129493566s to wait for apiserver process to appear ...
	I1227 20:32:30.275549  415328 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:32:30.275569  415328 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:32:30.306377  415328 api_server.go:325] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1227 20:32:30.306448  415328 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1227 20:32:30.776649  415328 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:32:30.785864  415328 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:32:30.785934  415328 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:32:31.276199  415328 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:32:31.285250  415328 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:32:31.285289  415328 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:32:31.775859  415328 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:32:31.783951  415328 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 20:32:31.784963  415328 api_server.go:141] control plane version: v1.35.0
	I1227 20:32:31.784990  415328 api_server.go:131] duration metric: took 1.509434005s to wait for apiserver health ...
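(Annotation: the healthz wait above first gets a 403 because the probe reaches the apiserver as the anonymous user, then a 500 while the rbac/bootstrap-roles and scheduling post-start hooks are still running, and keeps retrying roughly every 500ms until /healthz returns 200. A minimal Go sketch of that loop — not minikube's code; TLS verification is skipped here purely for brevity, whereas the real check authenticates with the cluster CA and client certificates:)

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.76.2:8443/healthz" // endpoint taken from the log above
	for attempt := 0; attempt < 120; attempt++ {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode) // 403/500 mean "not ready yet"
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence in the log
	}
	fmt.Fprintln(os.Stderr, "apiserver never became healthy")
	os.Exit(1)
}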
	I1227 20:32:31.785000  415328 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:32:31.788349  415328 system_pods.go:59] 7 kube-system pods found
	I1227 20:32:31.788386  415328 system_pods.go:61] "coredns-7d764666f9-c22b6" [6b763ed8-dbdc-44c4-a42a-5cd6236da10f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:32:31.788399  415328 system_pods.go:61] "etcd-pause-063268" [fad2b2af-e6b8-4b24-a260-9ced85b4bd27] Running
	I1227 20:32:31.788405  415328 system_pods.go:61] "kindnet-g5stk" [4e9e88b5-4c0a-4667-b384-b7d7d0a91df3] Running
	I1227 20:32:31.788410  415328 system_pods.go:61] "kube-apiserver-pause-063268" [1503e82e-99ff-429d-a01c-94fefae9f4da] Running
	I1227 20:32:31.788417  415328 system_pods.go:61] "kube-controller-manager-pause-063268" [2165ed05-52a6-42da-bb0c-4625fd414ee9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:32:31.788427  415328 system_pods.go:61] "kube-proxy-hkrgp" [d611fc25-9029-493d-9149-7d9ba7551fc1] Running
	I1227 20:32:31.788432  415328 system_pods.go:61] "kube-scheduler-pause-063268" [8d49927b-d593-40e4-9cac-7699fd215da1] Running
	I1227 20:32:31.788440  415328 system_pods.go:74] duration metric: took 3.434534ms to wait for pod list to return data ...
	I1227 20:32:31.788448  415328 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:32:31.791131  415328 default_sa.go:45] found service account: "default"
	I1227 20:32:31.791155  415328 default_sa.go:55] duration metric: took 2.692208ms for default service account to be created ...
	I1227 20:32:31.791166  415328 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:32:31.793708  415328 system_pods.go:86] 7 kube-system pods found
	I1227 20:32:31.793740  415328 system_pods.go:89] "coredns-7d764666f9-c22b6" [6b763ed8-dbdc-44c4-a42a-5cd6236da10f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:32:31.793747  415328 system_pods.go:89] "etcd-pause-063268" [fad2b2af-e6b8-4b24-a260-9ced85b4bd27] Running
	I1227 20:32:31.793753  415328 system_pods.go:89] "kindnet-g5stk" [4e9e88b5-4c0a-4667-b384-b7d7d0a91df3] Running
	I1227 20:32:31.793758  415328 system_pods.go:89] "kube-apiserver-pause-063268" [1503e82e-99ff-429d-a01c-94fefae9f4da] Running
	I1227 20:32:31.793765  415328 system_pods.go:89] "kube-controller-manager-pause-063268" [2165ed05-52a6-42da-bb0c-4625fd414ee9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:32:31.793771  415328 system_pods.go:89] "kube-proxy-hkrgp" [d611fc25-9029-493d-9149-7d9ba7551fc1] Running
	I1227 20:32:31.793779  415328 system_pods.go:89] "kube-scheduler-pause-063268" [8d49927b-d593-40e4-9cac-7699fd215da1] Running
	I1227 20:32:31.793787  415328 system_pods.go:126] duration metric: took 2.614424ms to wait for k8s-apps to be running ...
	I1227 20:32:31.793794  415328 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:32:31.793858  415328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:32:31.806632  415328 system_svc.go:56] duration metric: took 12.827919ms WaitForService to wait for kubelet
	I1227 20:32:31.806661  415328 kubeadm.go:587] duration metric: took 6.660634415s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:32:31.806680  415328 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:32:31.809627  415328 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:32:31.809656  415328 node_conditions.go:123] node cpu capacity is 2
	I1227 20:32:31.809669  415328 node_conditions.go:105] duration metric: took 2.966622ms to run NodePressure ...
	I1227 20:32:31.809682  415328 start.go:242] waiting for startup goroutines ...
	I1227 20:32:31.809690  415328 start.go:247] waiting for cluster config update ...
	I1227 20:32:31.809698  415328 start.go:256] writing updated cluster config ...
	I1227 20:32:31.809990  415328 ssh_runner.go:195] Run: rm -f paused
	I1227 20:32:31.813564  415328 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:32:31.814189  415328 kapi.go:59] client config for pause-063268: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/pause-063268/client.crt", KeyFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/profiles/pause-063268/client.key", CAFile:"/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 20:32:31.816886  415328 pod_ready.go:83] waiting for pod "coredns-7d764666f9-c22b6" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:32:30.935179  416334 cli_runner.go:164] Run: docker container inspect missing-upgrade-655901 --format={{.State.Status}}
	W1227 20:32:30.950849  416334 cli_runner.go:211] docker container inspect missing-upgrade-655901 --format={{.State.Status}} returned with exit code 1
	I1227 20:32:30.950909  416334 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:30.950919  416334 oci.go:673] temporary error: container missing-upgrade-655901 status is  but expect it to be exited
	W1227 20:32:33.822838  415328 pod_ready.go:104] pod "coredns-7d764666f9-c22b6" is not "Ready", error: <nil>
	W1227 20:32:35.822878  415328 pod_ready.go:104] pod "coredns-7d764666f9-c22b6" is not "Ready", error: <nil>
	W1227 20:32:38.321926  415328 pod_ready.go:104] pod "coredns-7d764666f9-c22b6" is not "Ready", error: <nil>
	I1227 20:32:34.585153  416334 cli_runner.go:164] Run: docker container inspect missing-upgrade-655901 --format={{.State.Status}}
	W1227 20:32:34.603255  416334 cli_runner.go:211] docker container inspect missing-upgrade-655901 --format={{.State.Status}} returned with exit code 1
	I1227 20:32:34.603320  416334 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:34.603355  416334 oci.go:673] temporary error: container missing-upgrade-655901 status is  but expect it to be exited
	I1227 20:32:34.603402  416334 retry.go:84] will retry after 3.4s: couldn't verify container is exited. %v: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:37.975207  416334 cli_runner.go:164] Run: docker container inspect missing-upgrade-655901 --format={{.State.Status}}
	W1227 20:32:37.990200  416334 cli_runner.go:211] docker container inspect missing-upgrade-655901 --format={{.State.Status}} returned with exit code 1
	I1227 20:32:37.990265  416334 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:37.990281  416334 oci.go:673] temporary error: container missing-upgrade-655901 status is  but expect it to be exited
	I1227 20:32:37.990314  416334 retry.go:84] will retry after 5.5s: couldn't verify container is exited. %v: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:39.322469  415328 pod_ready.go:94] pod "coredns-7d764666f9-c22b6" is "Ready"
	I1227 20:32:39.322498  415328 pod_ready.go:86] duration metric: took 7.5055902s for pod "coredns-7d764666f9-c22b6" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:32:39.325053  415328 pod_ready.go:83] waiting for pod "etcd-pause-063268" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:32:40.331029  415328 pod_ready.go:94] pod "etcd-pause-063268" is "Ready"
	I1227 20:32:40.331062  415328 pod_ready.go:86] duration metric: took 1.005981297s for pod "etcd-pause-063268" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:32:40.333633  415328 pod_ready.go:83] waiting for pod "kube-apiserver-pause-063268" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:32:40.338377  415328 pod_ready.go:94] pod "kube-apiserver-pause-063268" is "Ready"
	I1227 20:32:40.338405  415328 pod_ready.go:86] duration metric: took 4.743551ms for pod "kube-apiserver-pause-063268" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:32:40.340578  415328 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-063268" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:32:41.845812  415328 pod_ready.go:94] pod "kube-controller-manager-pause-063268" is "Ready"
	I1227 20:32:41.845837  415328 pod_ready.go:86] duration metric: took 1.505230684s for pod "kube-controller-manager-pause-063268" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:32:41.848389  415328 pod_ready.go:83] waiting for pod "kube-proxy-hkrgp" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:32:42.121661  415328 pod_ready.go:94] pod "kube-proxy-hkrgp" is "Ready"
	I1227 20:32:42.121691  415328 pod_ready.go:86] duration metric: took 273.27958ms for pod "kube-proxy-hkrgp" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:32:42.321518  415328 pod_ready.go:83] waiting for pod "kube-scheduler-pause-063268" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:32:42.720454  415328 pod_ready.go:94] pod "kube-scheduler-pause-063268" is "Ready"
	I1227 20:32:42.720487  415328 pod_ready.go:86] duration metric: took 398.935671ms for pod "kube-scheduler-pause-063268" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:32:42.720500  415328 pod_ready.go:40] duration metric: took 10.906898856s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:32:42.774516  415328 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 20:32:42.777647  415328 out.go:203] 
	W1227 20:32:42.780539  415328 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 20:32:42.783306  415328 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 20:32:42.786051  415328 out.go:179] * Done! kubectl is now configured to use "pause-063268" cluster and "default" namespace by default
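(Annotation: the pod_ready waits above check that every kube-system pod carries a PodReady=True condition. A rough client-go equivalent — not minikube's own code; the kubeconfig path is the one shown in the log and would need adjusting elsewhere:)

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod has a PodReady condition with status True.
func isReady(pod corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22332-272475/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%-45s ready=%v\n", p.Name, isReady(p))
	}
}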
	I1227 20:32:43.539322  416334 cli_runner.go:164] Run: docker container inspect missing-upgrade-655901 --format={{.State.Status}}
	W1227 20:32:43.555055  416334 cli_runner.go:211] docker container inspect missing-upgrade-655901 --format={{.State.Status}} returned with exit code 1
	I1227 20:32:43.555118  416334 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	I1227 20:32:43.555140  416334 oci.go:673] temporary error: container missing-upgrade-655901 status is  but expect it to be exited
	I1227 20:32:43.555174  416334 oci.go:88] couldn't shut down missing-upgrade-655901 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-655901": docker container inspect missing-upgrade-655901 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-655901
	 
	I1227 20:32:43.555235  416334 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-655901
	I1227 20:32:43.571009  416334 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-655901
	W1227 20:32:43.587635  416334 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-655901 returned with exit code 1
	I1227 20:32:43.587724  416334 cli_runner.go:164] Run: docker network inspect missing-upgrade-655901 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:32:43.603735  416334 cli_runner.go:164] Run: docker network rm missing-upgrade-655901
	I1227 20:32:43.708556  416334 fix.go:124] Sleeping 1 second for extra luck!
	I1227 20:32:44.709065  416334 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Dec 27 20:32:26 pause-063268 crio[2086]: time="2025-12-27T20:32:26.984460987Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:32:27 pause-063268 crio[2086]: time="2025-12-27T20:32:27.02477657Z" level=info msg="Created container 2d5187c09e0d2282d9b217df137e91d2bde4c8df84c181dfc67dd7d051c343d1: kube-system/kube-controller-manager-pause-063268/kube-controller-manager" id=1d8cb867-5470-42df-8b3a-e953a28e5f3a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:32:27 pause-063268 crio[2086]: time="2025-12-27T20:32:27.025855274Z" level=info msg="Starting container: 2d5187c09e0d2282d9b217df137e91d2bde4c8df84c181dfc67dd7d051c343d1" id=9078e8ca-f7b0-4963-a64d-6555150a49f0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:32:27 pause-063268 crio[2086]: time="2025-12-27T20:32:27.029171943Z" level=info msg="Created container 46206dca1d54589a2dc7e056f6533461fa94f3ac29b80ee9ef2c43ecd0f219da: kube-system/kube-apiserver-pause-063268/kube-apiserver" id=53292ce7-4163-4beb-ab5f-b7f13fb408fe name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:32:27 pause-063268 crio[2086]: time="2025-12-27T20:32:27.030071682Z" level=info msg="Starting container: 46206dca1d54589a2dc7e056f6533461fa94f3ac29b80ee9ef2c43ecd0f219da" id=96a84d6a-6d1c-4a1a-adf7-522eec1ef57c name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:32:27 pause-063268 crio[2086]: time="2025-12-27T20:32:27.033021122Z" level=info msg="Started container" PID=2420 containerID=2d5187c09e0d2282d9b217df137e91d2bde4c8df84c181dfc67dd7d051c343d1 description=kube-system/kube-controller-manager-pause-063268/kube-controller-manager id=9078e8ca-f7b0-4963-a64d-6555150a49f0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=81536374a07789b57d3e33b9b5d43c8793b24b7ca122b858b0f267dfc8ccfa53
	Dec 27 20:32:27 pause-063268 crio[2086]: time="2025-12-27T20:32:27.042775434Z" level=info msg="Created container 3dcb1acad15eb773b8e4d1bf48eff17bd636db39f14d8b4c2114369015e50c99: kube-system/etcd-pause-063268/etcd" id=a31e55b9-8649-43a5-81a4-a286dabc0ff4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:32:27 pause-063268 crio[2086]: time="2025-12-27T20:32:27.044217618Z" level=info msg="Started container" PID=2422 containerID=46206dca1d54589a2dc7e056f6533461fa94f3ac29b80ee9ef2c43ecd0f219da description=kube-system/kube-apiserver-pause-063268/kube-apiserver id=96a84d6a-6d1c-4a1a-adf7-522eec1ef57c name=/runtime.v1.RuntimeService/StartContainer sandboxID=470773e6287048246b0a0683cf448359686d58cffb1fee0e0e6cab89195cdd3f
	Dec 27 20:32:27 pause-063268 crio[2086]: time="2025-12-27T20:32:27.045100158Z" level=info msg="Starting container: 3dcb1acad15eb773b8e4d1bf48eff17bd636db39f14d8b4c2114369015e50c99" id=521f3aef-065c-42c6-89b5-d08448b9f5d8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:32:27 pause-063268 crio[2086]: time="2025-12-27T20:32:27.062931609Z" level=info msg="Started container" PID=2426 containerID=3dcb1acad15eb773b8e4d1bf48eff17bd636db39f14d8b4c2114369015e50c99 description=kube-system/etcd-pause-063268/etcd id=521f3aef-065c-42c6-89b5-d08448b9f5d8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0caca91153ab937a97f873ecb14955ebbf04b37e5acc5dff24efebc8eddb6b8e
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.180538218Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.184771619Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.184804775Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.184827691Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.187860125Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.18789982Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.187923065Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.191004531Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.191035833Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.191057092Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.194691146Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.194722981Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.194745175Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.197904638Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:32:37 pause-063268 crio[2086]: time="2025-12-27T20:32:37.197951923Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	3dcb1acad15eb       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     22 seconds ago       Running             etcd                      1                   0caca91153ab9       etcd-pause-063268                      kube-system
	46206dca1d545       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     22 seconds ago       Running             kube-apiserver            1                   470773e628704       kube-apiserver-pause-063268            kube-system
	2d5187c09e0d2       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     22 seconds ago       Running             kube-controller-manager   1                   81536374a0778       kube-controller-manager-pause-063268   kube-system
	3abedd3ca1b67       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     22 seconds ago       Running             kube-scheduler            1                   35b7de4fd04a2       kube-scheduler-pause-063268            kube-system
	fa108279b1eb9       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     22 seconds ago       Running             coredns                   1                   7f0ade60dd2c0       coredns-7d764666f9-c22b6               kube-system
	8cac825e5dc8b       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     22 seconds ago       Running             kube-proxy                1                   02e1eb74c1bd4       kube-proxy-hkrgp                       kube-system
	7d35bd38f16e2       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                     22 seconds ago       Running             kindnet-cni               1                   0edcd6415ee05       kindnet-g5stk                          kube-system
	416558ce12be9       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     38 seconds ago       Exited              coredns                   0                   7f0ade60dd2c0       coredns-7d764666f9-c22b6               kube-system
	b6029d6a282d2       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3   49 seconds ago       Exited              kindnet-cni               0                   0edcd6415ee05       kindnet-g5stk                          kube-system
	dc98608038347       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     52 seconds ago       Exited              kube-proxy                0                   02e1eb74c1bd4       kube-proxy-hkrgp                       kube-system
	9777b2b15db10       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     About a minute ago   Exited              kube-scheduler            0                   35b7de4fd04a2       kube-scheduler-pause-063268            kube-system
	d3397c067b48b       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     About a minute ago   Exited              kube-controller-manager   0                   81536374a0778       kube-controller-manager-pause-063268   kube-system
	9e51e3ee441e2       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     About a minute ago   Exited              kube-apiserver            0                   470773e628704       kube-apiserver-pause-063268            kube-system
	768d0620e2839       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     About a minute ago   Exited              etcd                      0                   0caca91153ab9       etcd-pause-063268                      kube-system
	
	
	==> coredns [416558ce12be971f6a4b6e7ef366b6cd5749f709d73875a41b18c0ee365732fb] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:43128 - 25322 "HINFO IN 3771375815311659064.1043972883728246632. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021885539s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fa108279b1eb9ceb70588f2233e4945ac93e186f6bcf3c4ee7c054449cf0f75b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:47229 - 12593 "HINFO IN 4191501068697679840.7882993322500420963. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008049406s
	
	
	==> describe nodes <==
	Name:               pause-063268
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-063268
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=pause-063268
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_31_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:31:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-063268
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:32:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:32:33 +0000   Sat, 27 Dec 2025 20:31:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:32:33 +0000   Sat, 27 Dec 2025 20:31:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:32:33 +0000   Sat, 27 Dec 2025 20:31:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:32:33 +0000   Sat, 27 Dec 2025 20:32:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-063268
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                aa57afc7-b131-4907-b0d7-ed80e5a1309f
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-c22b6                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     53s
	  kube-system                 etcd-pause-063268                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         58s
	  kube-system                 kindnet-g5stk                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      53s
	  kube-system                 kube-apiserver-pause-063268             250m (12%)    0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-controller-manager-pause-063268    200m (10%)    0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-proxy-hkrgp                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-scheduler-pause-063268             100m (5%)     0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
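(Annotation: the percentages in the Allocated resources table are the summed pod requests and limits relative to the node's allocatable capacity listed above — for example, 850m of CPU requested against 2 CPUs (2000m) is about 42%, the 100m CPU limit is 5%, and 220Mi of memory against 8022296Ki allocatable works out to roughly 2%.)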
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  54s   node-controller  Node pause-063268 event: Registered Node pause-063268 in Controller
	  Normal  RegisteredNode  17s   node-controller  Node pause-063268 event: Registered Node pause-063268 in Controller
	
	
	==> dmesg <==
	[Dec27 20:04] overlayfs: idmapped layers are currently not supported
	[Dec27 20:05] overlayfs: idmapped layers are currently not supported
	[Dec27 20:06] overlayfs: idmapped layers are currently not supported
	[Dec27 20:07] overlayfs: idmapped layers are currently not supported
	[  +3.687478] overlayfs: idmapped layers are currently not supported
	[Dec27 20:15] overlayfs: idmapped layers are currently not supported
	[  +3.163851] overlayfs: idmapped layers are currently not supported
	[Dec27 20:16] overlayfs: idmapped layers are currently not supported
	[ +35.129102] overlayfs: idmapped layers are currently not supported
	[Dec27 20:17] overlayfs: idmapped layers are currently not supported
	[Dec27 20:19] overlayfs: idmapped layers are currently not supported
	[ +36.244108] systemd-journald[225]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 20:22] overlayfs: idmapped layers are currently not supported
	[Dec27 20:23] overlayfs: idmapped layers are currently not supported
	[Dec27 20:24] overlayfs: idmapped layers are currently not supported
	[Dec27 20:25] overlayfs: idmapped layers are currently not supported
	[ +35.447549] overlayfs: idmapped layers are currently not supported
	[Dec27 20:26] overlayfs: idmapped layers are currently not supported
	[Dec27 20:27] overlayfs: idmapped layers are currently not supported
	[  +6.770645] overlayfs: idmapped layers are currently not supported
	[Dec27 20:28] overlayfs: idmapped layers are currently not supported
	[ +25.872751] overlayfs: idmapped layers are currently not supported
	[Dec27 20:29] overlayfs: idmapped layers are currently not supported
	[ +32.997137] overlayfs: idmapped layers are currently not supported
	[Dec27 20:31] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3dcb1acad15eb773b8e4d1bf48eff17bd636db39f14d8b4c2114369015e50c99] <==
	{"level":"info","ts":"2025-12-27T20:32:27.171291Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T20:32:27.171382Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T20:32:27.178370Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-27T20:32:27.182171Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T20:32:27.182507Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T20:32:27.181769Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T20:32:27.185804Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T20:32:27.649487Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T20:32:27.649543Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:32:27.649583Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T20:32:27.649594Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:32:27.649610Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T20:32:27.657499Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T20:32:27.657537Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:32:27.657552Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T20:32:27.657560Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T20:32:27.661620Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-063268 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:32:27.661660Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:32:27.661881Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:32:27.662782Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:32:27.664667Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:32:27.675279Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:32:27.675389Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:32:27.676191Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:32:27.683724Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> etcd [768d0620e2839f7eab59ee37f96feaa3ecf0b3a65150b795226874a68029be62] <==
	{"level":"info","ts":"2025-12-27T20:31:46.721721Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:31:46.721803Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:31:46.711212Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:31:46.721925Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T20:31:46.722020Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T20:31:46.726517Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T20:31:46.727045Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:32:15.949166Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-27T20:32:15.949335Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-063268","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-12-27T20:32:15.949421Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-27T20:32:16.113426Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-27T20:32:16.113547Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T20:32:16.113644Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"warn","ts":"2025-12-27T20:32:16.113646Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-27T20:32:16.113715Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-27T20:32:16.113754Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-27T20:32:16.113724Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-27T20:32:16.113837Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-27T20:32:16.113761Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-27T20:32:16.113928Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T20:32:16.113823Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-27T20:32:16.117144Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-12-27T20:32:16.117292Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T20:32:16.117353Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T20:32:16.117387Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-063268","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 20:32:50 up  2:15,  0 user,  load average: 2.41, 1.97, 1.81
	Linux pause-063268 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7d35bd38f16e24300670b1d7b0a1bb5c51e54ae70ff336a27ea418629779fd43] <==
	I1227 20:32:26.936541       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:32:26.937800       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 20:32:26.937944       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:32:26.937956       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:32:26.937966       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:32:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:32:27.180198       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:32:27.180286       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:32:27.180319       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:32:27.181801       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 20:32:27.229856       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1227 20:32:27.230079       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1227 20:32:27.230138       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 20:32:27.230291       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1227 20:32:30.381687       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:32:30.381732       1 metrics.go:72] Registering metrics
	I1227 20:32:30.381781       1 controller.go:711] "Syncing nftables rules"
	I1227 20:32:37.180082       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:32:37.180160       1 main.go:301] handling current node
	I1227 20:32:47.180314       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:32:47.180365       1 main.go:301] handling current node
	
	
	==> kindnet [b6029d6a282d2e2b9baf6745700225b920389457d9eaed47d3219d6a1698087b] <==
	I1227 20:32:00.640221       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:32:00.730113       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 20:32:00.730806       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:32:00.731630       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:32:00.732337       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:32:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:32:00.930138       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:32:00.930236       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:32:00.930271       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:32:00.931524       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 20:32:01.133526       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:32:01.133623       1 metrics.go:72] Registering metrics
	I1227 20:32:01.133726       1 controller.go:711] "Syncing nftables rules"
	I1227 20:32:10.845594       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:32:10.845657       1 main.go:301] handling current node
	
	
	==> kube-apiserver [46206dca1d54589a2dc7e056f6533461fa94f3ac29b80ee9ef2c43ecd0f219da] <==
	I1227 20:32:30.414284       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:32:30.426751       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:32:30.439638       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 20:32:30.440068       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 20:32:30.440351       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 20:32:30.444006       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 20:32:30.444063       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 20:32:30.444140       1 aggregator.go:187] initial CRD sync complete...
	I1227 20:32:30.444157       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 20:32:30.444165       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:32:30.444170       1 cache.go:39] Caches are synced for autoregister controller
	I1227 20:32:30.446333       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:30.446490       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:30.446567       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 20:32:30.446609       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 20:32:30.446747       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 20:32:30.447761       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 20:32:30.453769       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1227 20:32:30.463301       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 20:32:31.152391       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:32:32.241823       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:32:33.692743       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:32:33.744868       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:32:33.842885       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:32:33.945678       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [9e51e3ee441e237964c680e42a957dc502b056f13d851926ff327daf2de4f7d4] <==
	W1227 20:32:15.999083       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999108       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999134       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999165       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999187       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999211       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999238       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999263       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999288       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999314       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999341       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999365       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999395       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999423       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999451       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999478       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999505       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999533       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999561       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999589       1 logging.go:55] [core] [Channel #26 SubChannel #28]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999616       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999647       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999677       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999878       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 20:32:15.999911       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [2d5187c09e0d2282d9b217df137e91d2bde4c8df84c181dfc67dd7d051c343d1] <==
	I1227 20:32:33.453080       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.453162       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.453231       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.453287       1 range_allocator.go:177] "Sending events to api server"
	I1227 20:32:33.453817       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.454916       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.456752       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 20:32:33.456785       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:32:33.456791       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.457043       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.457206       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.458319       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.458332       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.461598       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.461816       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.462023       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.462207       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.462566       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.464559       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:32:33.482872       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.558860       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.558885       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:32:33.558891       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:32:33.565113       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:33.695696       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-controller-manager [d3397c067b48bfc38eea1e3595441ae62853459c7ca60afd179fd9d4a21ac34d] <==
	I1227 20:31:56.097132       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.097261       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.097399       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.097426       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.097440       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.099445       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.100395       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.123970       1 range_allocator.go:177] "Sending events to api server"
	I1227 20:31:56.124010       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 20:31:56.124025       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:31:56.124032       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.100407       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.100420       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.100489       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.100497       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.124492       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 20:31:56.124565       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-063268"
	I1227 20:31:56.124613       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1227 20:31:56.101222       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.110828       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:31:56.178412       1 range_allocator.go:433] "Set node PodCIDR" node="pause-063268" podCIDRs=["10.244.0.0/24"]
	I1227 20:31:56.211578       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:56.211611       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:31:56.211617       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:31:56.224884       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [8cac825e5dc8b16adf125824bbe9d9e4396548dcbe927b2c4b11f08cff8dfa4d] <==
	I1227 20:32:27.313081       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:32:28.011140       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:32:30.431878       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:30.442094       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 20:32:30.457464       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:32:30.595234       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:32:30.595291       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:32:30.604032       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:32:30.604376       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:32:30.604551       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:32:30.606147       1 config.go:200] "Starting service config controller"
	I1227 20:32:30.606213       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:32:30.606257       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:32:30.606301       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:32:30.606339       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:32:30.606373       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:32:30.608741       1 config.go:309] "Starting node config controller"
	I1227 20:32:30.609648       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:32:30.609704       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:32:30.706928       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 20:32:30.707049       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:32:30.707128       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [dc98608038347ea5a66daf1ae3446ca17649266730a5d13c8aa1465c8a6f3124] <==
	I1227 20:31:57.784927       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:31:57.924255       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:31:58.024643       1 shared_informer.go:377] "Caches are synced"
	I1227 20:31:58.024778       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 20:31:58.024939       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:31:58.049093       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:31:58.049219       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:31:58.068636       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:31:58.069064       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:31:58.069088       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:31:58.078584       1 config.go:200] "Starting service config controller"
	I1227 20:31:58.079653       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:31:58.079702       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:31:58.079740       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:31:58.079888       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:31:58.079917       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:31:58.086826       1 config.go:309] "Starting node config controller"
	I1227 20:31:58.086924       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:31:58.086958       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:31:58.180270       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:31:58.180381       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 20:31:58.180620       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [3abedd3ca1b6781d775413bebb693eb5e728039da24f46af35b97da37204edf6] <==
	I1227 20:32:27.282947       1 serving.go:386] Generated self-signed cert in-memory
	W1227 20:32:30.230154       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 20:32:30.230256       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 20:32:30.230293       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 20:32:30.230325       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 20:32:30.461184       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 20:32:30.461271       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:32:30.470117       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 20:32:30.470219       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:32:30.471061       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 20:32:30.471133       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 20:32:30.570706       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [9777b2b15db10899b07ba1594d186322030830e3fdb8ddbd2a0f20737d3d28c8] <==
	E1227 20:31:49.547163       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 20:31:49.547247       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 20:31:49.547295       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 20:31:49.547445       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 20:31:49.547546       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 20:31:49.547637       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 20:31:49.547729       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 20:31:49.547818       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 20:31:49.547917       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 20:31:49.549498       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 20:31:49.549616       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 20:31:50.381537       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 20:31:50.505974       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 20:31:50.521405       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 20:31:50.579394       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 20:31:50.622871       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 20:31:50.673689       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 20:31:50.688828       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	I1227 20:31:52.541566       1 shared_informer.go:377] "Caches are synced"
	I1227 20:32:15.947748       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1227 20:32:15.947783       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1227 20:32:15.947802       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1227 20:32:15.947882       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 20:32:15.948105       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1227 20:32:15.948155       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 27 20:32:29 pause-063268 kubelet[1300]: E1227 20:32:29.984644    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-063268" containerName="kube-apiserver"
	Dec 27 20:32:30 pause-063268 kubelet[1300]: E1227 20:32:30.126777    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-063268" containerName="etcd"
	Dec 27 20:32:30 pause-063268 kubelet[1300]: E1227 20:32:30.211151    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-063268\" is forbidden: User \"system:node:pause-063268\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-063268' and this object" podUID="2c9686bd3c253dec5368eafec73fd79b" pod="kube-system/kube-controller-manager-pause-063268"
	Dec 27 20:32:30 pause-063268 kubelet[1300]: E1227 20:32:30.217744    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-apiserver-pause-063268\" is forbidden: User \"system:node:pause-063268\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-063268' and this object" podUID="3893ee8cfec715dd431272f8c3f7e562" pod="kube-system/kube-apiserver-pause-063268"
	Dec 27 20:32:30 pause-063268 kubelet[1300]: E1227 20:32:30.219296    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-scheduler-pause-063268\" is forbidden: User \"system:node:pause-063268\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-063268' and this object" podUID="a926e6cfc4479a30b2a8bdacb35d5de1" pod="kube-system/kube-scheduler-pause-063268"
	Dec 27 20:32:30 pause-063268 kubelet[1300]: E1227 20:32:30.220624    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"etcd-pause-063268\" is forbidden: User \"system:node:pause-063268\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-063268' and this object" podUID="86ccf717b08ffd220d7c6edcb0a83cad" pod="kube-system/etcd-pause-063268"
	Dec 27 20:32:30 pause-063268 kubelet[1300]: E1227 20:32:30.222539    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-proxy-hkrgp\" is forbidden: User \"system:node:pause-063268\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-063268' and this object" podUID="d611fc25-9029-493d-9149-7d9ba7551fc1" pod="kube-system/kube-proxy-hkrgp"
	Dec 27 20:32:30 pause-063268 kubelet[1300]: E1227 20:32:30.230538    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"kindnet-g5stk\" is forbidden: User \"system:node:pause-063268\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-063268' and this object" podUID="4e9e88b5-4c0a-4667-b384-b7d7d0a91df3" pod="kube-system/kindnet-g5stk"
	Dec 27 20:32:30 pause-063268 kubelet[1300]: E1227 20:32:30.239920    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"coredns-7d764666f9-c22b6\" is forbidden: User \"system:node:pause-063268\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-063268' and this object" podUID="6b763ed8-dbdc-44c4-a42a-5cd6236da10f" pod="kube-system/coredns-7d764666f9-c22b6"
	Dec 27 20:32:30 pause-063268 kubelet[1300]: E1227 20:32:30.257294    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-apiserver-pause-063268\" is forbidden: User \"system:node:pause-063268\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-063268' and this object" podUID="3893ee8cfec715dd431272f8c3f7e562" pod="kube-system/kube-apiserver-pause-063268"
	Dec 27 20:32:30 pause-063268 kubelet[1300]: E1227 20:32:30.308529    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-scheduler-pause-063268\" is forbidden: User \"system:node:pause-063268\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-063268' and this object" podUID="a926e6cfc4479a30b2a8bdacb35d5de1" pod="kube-system/kube-scheduler-pause-063268"
	Dec 27 20:32:30 pause-063268 kubelet[1300]: E1227 20:32:30.336864    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"etcd-pause-063268\" is forbidden: User \"system:node:pause-063268\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-063268' and this object" podUID="86ccf717b08ffd220d7c6edcb0a83cad" pod="kube-system/etcd-pause-063268"
	Dec 27 20:32:31 pause-063268 kubelet[1300]: E1227 20:32:31.419580    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-063268" containerName="kube-controller-manager"
	Dec 27 20:32:32 pause-063268 kubelet[1300]: W1227 20:32:32.814430    1300 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 27 20:32:38 pause-063268 kubelet[1300]: E1227 20:32:38.983190    1300 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-c22b6" containerName="coredns"
	Dec 27 20:32:39 pause-063268 kubelet[1300]: E1227 20:32:39.399714    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-063268" containerName="kube-scheduler"
	Dec 27 20:32:39 pause-063268 kubelet[1300]: E1227 20:32:39.828208    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-063268" containerName="kube-apiserver"
	Dec 27 20:32:40 pause-063268 kubelet[1300]: E1227 20:32:40.006691    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-063268" containerName="kube-apiserver"
	Dec 27 20:32:40 pause-063268 kubelet[1300]: E1227 20:32:40.006985    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-063268" containerName="kube-scheduler"
	Dec 27 20:32:40 pause-063268 kubelet[1300]: E1227 20:32:40.128369    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-063268" containerName="etcd"
	Dec 27 20:32:41 pause-063268 kubelet[1300]: E1227 20:32:41.009006    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-063268" containerName="etcd"
	Dec 27 20:32:41 pause-063268 kubelet[1300]: E1227 20:32:41.428226    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-063268" containerName="kube-controller-manager"
	Dec 27 20:32:43 pause-063268 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 20:32:43 pause-063268 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 20:32:43 pause-063268 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-063268 -n pause-063268
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-063268 -n pause-063268: exit status 2 (838.090235ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-063268 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (9.56s)
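
A note on the status check in the post-mortem above: `minikube status --format={{.APIServer}}` renders a Go text/template against the profile's status, so stdout carries only the selected field ("Running") while the non-zero exit code signals that some component is not healthy. The following is a minimal, self-contained sketch of that template evaluation; the Status struct shape and its values are assumptions for illustration, and only the APIServer field name comes from the command above.

	package main

	import (
		"os"
		"text/template"
	)

	// Status is an assumed, minimal stand-in for the structure that a
	// --format template is evaluated against; field values are illustrative.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string // the field selected by --format={{.APIServer}}
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		// Prints "Running", matching the stdout captured in the log above.
		_ = tmpl.Execute(os.Stdout, st)
	}
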

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-855707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-855707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (241.635246ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:50:53Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-855707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-855707 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-855707 describe deploy/metrics-server -n kube-system: exit status 1 (79.073727ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-855707 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
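The MK_ADDON_ENABLE_PAUSED failure quoted above comes from minikube's paused-state check, which (per the error string) shells out to `sudo runc list -f json` on the node and treats any non-zero exit as fatal. A minimal repro sketch, assuming the profile name from this run and that `minikube ssh` forwards the trailing command to the node:

# run the same runtime check the addon command performs (profile name taken from this run)
out/minikube-linux-arm64 ssh -p old-k8s-version-855707 -- sudo runc list -f json
# on this node the check fails the same way the stderr above shows:
#   level=error msg="open /run/runc: no such file or directory"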
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-855707
helpers_test.go:244: (dbg) docker inspect old-k8s-version-855707:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2",
	        "Created": "2025-12-27T20:49:49.083982112Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 480749,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:49:49.143860066Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2/hostname",
	        "HostsPath": "/var/lib/docker/containers/ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2/hosts",
	        "LogPath": "/var/lib/docker/containers/ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2/ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2-json.log",
	        "Name": "/old-k8s-version-855707",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-855707:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-855707",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2",
	                "LowerDir": "/var/lib/docker/overlay2/4cfbcea77308f009a9856e9df5c3a29b9bfd669c592158a020d3e7751dc1e39a-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4cfbcea77308f009a9856e9df5c3a29b9bfd669c592158a020d3e7751dc1e39a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4cfbcea77308f009a9856e9df5c3a29b9bfd669c592158a020d3e7751dc1e39a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4cfbcea77308f009a9856e9df5c3a29b9bfd669c592158a020d3e7751dc1e39a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-855707",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-855707/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-855707",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-855707",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-855707",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "13ecdf91ec39b19492ed3438b335268eb771770f6f47a94f31443db72b1e13b4",
	            "SandboxKey": "/var/run/docker/netns/13ecdf91ec39",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33408"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33409"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33412"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33410"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33411"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-855707": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:f0:e4:c7:7e:14",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "30f18a4a5fe47b52fd514e9e7c68df45288c84a3f84ad77d2d2746ff085abb75",
	                    "EndpointID": "4fd67c3601d8beef669d4bd8c11510afcf934735f2baef3d169c0b7dd09d0ad7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-855707",
	                        "ffdc66f60c1f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
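The published ports in the inspect output above can be read back with the same Go-template form the harness itself uses for 22/tcp later in these logs; a quick sketch for the API server port, assuming the container name from this inspect output:

# read the host port mapped to the API server's 8443/tcp (33411 in the inspect output above)
docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-855707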
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-855707 -n old-k8s-version-855707
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-855707 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-855707 logs -n 25: (1.137045571s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-037975 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo containerd config dump                                                                                                                                                                                                  │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo crio config                                                                                                                                                                                                             │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ delete  │ -p cilium-037975                                                                                                                                                                                                                              │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │ 27 Dec 25 20:45 UTC │
	│ start   │ -p cert-expiration-629954 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-629954    │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │ 27 Dec 25 20:45 UTC │
	│ start   │ -p cert-expiration-629954 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-629954    │ jenkins │ v1.37.0 │ 27 Dec 25 20:48 UTC │ 27 Dec 25 20:48 UTC │
	│ delete  │ -p cert-expiration-629954                                                                                                                                                                                                                     │ cert-expiration-629954    │ jenkins │ v1.37.0 │ 27 Dec 25 20:48 UTC │ 27 Dec 25 20:48 UTC │
	│ start   │ -p force-systemd-flag-604544 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-604544 │ jenkins │ v1.37.0 │ 27 Dec 25 20:48 UTC │                     │
	│ delete  │ -p force-systemd-env-859716                                                                                                                                                                                                                   │ force-systemd-env-859716  │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ start   │ -p cert-options-765175 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-765175       │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ ssh     │ cert-options-765175 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-765175       │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ ssh     │ -p cert-options-765175 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-765175       │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ delete  │ -p cert-options-765175                                                                                                                                                                                                                        │ cert-options-765175       │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ start   │ -p old-k8s-version-855707 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-855707    │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-855707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-855707    │ jenkins │ v1.37.0 │ 27 Dec 25 20:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:49:43
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:49:43.075429  480316 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:49:43.075547  480316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:49:43.075559  480316 out.go:374] Setting ErrFile to fd 2...
	I1227 20:49:43.075564  480316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:49:43.075821  480316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:49:43.076228  480316 out.go:368] Setting JSON to false
	I1227 20:49:43.077093  480316 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9135,"bootTime":1766859448,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:49:43.077167  480316 start.go:143] virtualization:  
	I1227 20:49:43.080929  480316 out.go:179] * [old-k8s-version-855707] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:49:43.085626  480316 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:49:43.085729  480316 notify.go:221] Checking for updates...
	I1227 20:49:43.092230  480316 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:49:43.095617  480316 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:49:43.098831  480316 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:49:43.102026  480316 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:49:43.105172  480316 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:49:43.108858  480316 config.go:182] Loaded profile config "force-systemd-flag-604544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:49:43.109005  480316 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:49:43.136219  480316 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:49:43.136329  480316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:49:43.197843  480316 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:49:43.189122859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:49:43.197938  480316 docker.go:319] overlay module found
	I1227 20:49:43.201321  480316 out.go:179] * Using the docker driver based on user configuration
	I1227 20:49:43.204337  480316 start.go:309] selected driver: docker
	I1227 20:49:43.204360  480316 start.go:928] validating driver "docker" against <nil>
	I1227 20:49:43.204387  480316 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:49:43.205095  480316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:49:43.265265  480316 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:49:43.255770502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:49:43.265407  480316 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 20:49:43.265686  480316 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:49:43.268725  480316 out.go:179] * Using Docker driver with root privileges
	I1227 20:49:43.271691  480316 cni.go:84] Creating CNI manager for ""
	I1227 20:49:43.272053  480316 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:49:43.272072  480316 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 20:49:43.272313  480316 start.go:353] cluster config:
	{Name:old-k8s-version-855707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-855707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:49:43.275463  480316 out.go:179] * Starting "old-k8s-version-855707" primary control-plane node in "old-k8s-version-855707" cluster
	I1227 20:49:43.278257  480316 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:49:43.281199  480316 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:49:43.283989  480316 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 20:49:43.284052  480316 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:49:43.284076  480316 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:49:43.284082  480316 cache.go:65] Caching tarball of preloaded images
	I1227 20:49:43.284190  480316 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:49:43.284201  480316 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1227 20:49:43.284329  480316 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/config.json ...
	I1227 20:49:43.284355  480316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/config.json: {Name:mk90e9b5bd1eafe5174295980844c3092c3c6e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:49:43.303579  480316 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:49:43.303606  480316 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:49:43.303621  480316 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:49:43.303651  480316 start.go:360] acquireMachinesLock for old-k8s-version-855707: {Name:mk772100ba05b793472926b85f6f775654e62c2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:49:43.303758  480316 start.go:364] duration metric: took 87.85µs to acquireMachinesLock for "old-k8s-version-855707"
	I1227 20:49:43.303787  480316 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-855707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-855707 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:49:43.303868  480316 start.go:125] createHost starting for "" (driver="docker")
	I1227 20:49:43.307301  480316 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 20:49:43.307543  480316 start.go:159] libmachine.API.Create for "old-k8s-version-855707" (driver="docker")
	I1227 20:49:43.307581  480316 client.go:173] LocalClient.Create starting
	I1227 20:49:43.307644  480316 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem
	I1227 20:49:43.307684  480316 main.go:144] libmachine: Decoding PEM data...
	I1227 20:49:43.307705  480316 main.go:144] libmachine: Parsing certificate...
	I1227 20:49:43.307761  480316 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem
	I1227 20:49:43.307783  480316 main.go:144] libmachine: Decoding PEM data...
	I1227 20:49:43.307795  480316 main.go:144] libmachine: Parsing certificate...
	I1227 20:49:43.308162  480316 cli_runner.go:164] Run: docker network inspect old-k8s-version-855707 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 20:49:43.324027  480316 cli_runner.go:211] docker network inspect old-k8s-version-855707 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 20:49:43.324132  480316 network_create.go:284] running [docker network inspect old-k8s-version-855707] to gather additional debugging logs...
	I1227 20:49:43.324149  480316 cli_runner.go:164] Run: docker network inspect old-k8s-version-855707
	W1227 20:49:43.342029  480316 cli_runner.go:211] docker network inspect old-k8s-version-855707 returned with exit code 1
	I1227 20:49:43.342055  480316 network_create.go:287] error running [docker network inspect old-k8s-version-855707]: docker network inspect old-k8s-version-855707: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-855707 not found
	I1227 20:49:43.342089  480316 network_create.go:289] output of [docker network inspect old-k8s-version-855707]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-855707 not found
	
	** /stderr **
	I1227 20:49:43.342198  480316 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:49:43.361841  480316 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9521cb9225c5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:1d:ef:38:b7:a6} reservation:<nil>}
	I1227 20:49:43.362237  480316 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-68d11cc2ab47 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:8d:ad:37:cb:fe} reservation:<nil>}
	I1227 20:49:43.362479  480316 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d3b7cfff4895 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:4a:e3:08:10:2f} reservation:<nil>}
	I1227 20:49:43.362938  480316 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a18fc0}
	I1227 20:49:43.362966  480316 network_create.go:124] attempt to create docker network old-k8s-version-855707 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 20:49:43.363022  480316 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-855707 old-k8s-version-855707
	I1227 20:49:43.422287  480316 network_create.go:108] docker network old-k8s-version-855707 192.168.76.0/24 created
	I1227 20:49:43.422317  480316 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-855707" container
	I1227 20:49:43.422404  480316 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 20:49:43.438402  480316 cli_runner.go:164] Run: docker volume create old-k8s-version-855707 --label name.minikube.sigs.k8s.io=old-k8s-version-855707 --label created_by.minikube.sigs.k8s.io=true
	I1227 20:49:43.455683  480316 oci.go:103] Successfully created a docker volume old-k8s-version-855707
	I1227 20:49:43.455780  480316 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-855707-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-855707 --entrypoint /usr/bin/test -v old-k8s-version-855707:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 20:49:43.977110  480316 oci.go:107] Successfully prepared a docker volume old-k8s-version-855707
	I1227 20:49:43.977181  480316 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 20:49:43.977195  480316 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 20:49:43.977272  480316 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-855707:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 20:49:49.020765  480316 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-855707:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (5.043430902s)
	I1227 20:49:49.020801  480316 kic.go:203] duration metric: took 5.043601596s to extract preloaded images to volume ...
	W1227 20:49:49.020936  480316 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 20:49:49.021042  480316 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 20:49:49.069899  480316 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-855707 --name old-k8s-version-855707 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-855707 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-855707 --network old-k8s-version-855707 --ip 192.168.76.2 --volume old-k8s-version-855707:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 20:49:49.329849  480316 cli_runner.go:164] Run: docker container inspect old-k8s-version-855707 --format={{.State.Running}}
	I1227 20:49:49.350410  480316 cli_runner.go:164] Run: docker container inspect old-k8s-version-855707 --format={{.State.Status}}
	I1227 20:49:49.382285  480316 cli_runner.go:164] Run: docker exec old-k8s-version-855707 stat /var/lib/dpkg/alternatives/iptables
	I1227 20:49:49.427967  480316 oci.go:144] the created container "old-k8s-version-855707" has a running status.
	I1227 20:49:49.427994  480316 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa...
	I1227 20:49:49.567721  480316 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 20:49:49.592669  480316 cli_runner.go:164] Run: docker container inspect old-k8s-version-855707 --format={{.State.Status}}
	I1227 20:49:49.610041  480316 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 20:49:49.610065  480316 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-855707 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 20:49:49.660014  480316 cli_runner.go:164] Run: docker container inspect old-k8s-version-855707 --format={{.State.Status}}
	I1227 20:49:49.680462  480316 machine.go:94] provisionDockerMachine start ...
	I1227 20:49:49.680562  480316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:49:49.705739  480316 main.go:144] libmachine: Using SSH client type: native
	I1227 20:49:49.706086  480316 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1227 20:49:49.706102  480316 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:49:49.706747  480316 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48598->127.0.0.1:33408: read: connection reset by peer
	I1227 20:49:52.849185  480316 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-855707
	
	I1227 20:49:52.849212  480316 ubuntu.go:182] provisioning hostname "old-k8s-version-855707"
	I1227 20:49:52.849276  480316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:49:52.866598  480316 main.go:144] libmachine: Using SSH client type: native
	I1227 20:49:52.866927  480316 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1227 20:49:52.866944  480316 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-855707 && echo "old-k8s-version-855707" | sudo tee /etc/hostname
	I1227 20:49:53.015937  480316 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-855707
	
	I1227 20:49:53.016026  480316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:49:53.034525  480316 main.go:144] libmachine: Using SSH client type: native
	I1227 20:49:53.034843  480316 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1227 20:49:53.034866  480316 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-855707' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-855707/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-855707' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:49:53.173766  480316 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:49:53.173801  480316 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:49:53.173820  480316 ubuntu.go:190] setting up certificates
	I1227 20:49:53.173830  480316 provision.go:84] configureAuth start
	I1227 20:49:53.173905  480316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-855707
	I1227 20:49:53.191026  480316 provision.go:143] copyHostCerts
	I1227 20:49:53.191090  480316 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:49:53.191103  480316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:49:53.191179  480316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:49:53.191286  480316 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:49:53.191297  480316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:49:53.191324  480316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:49:53.191385  480316 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:49:53.191396  480316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:49:53.191421  480316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:49:53.191472  480316 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-855707 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-855707]
	I1227 20:49:53.315886  480316 provision.go:177] copyRemoteCerts
	I1227 20:49:53.315953  480316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:49:53.315995  480316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:49:53.332482  480316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa Username:docker}
	I1227 20:49:53.433308  480316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1227 20:49:53.450907  480316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:49:53.468455  480316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:49:53.486023  480316 provision.go:87] duration metric: took 312.170936ms to configureAuth
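configureAuth copied the host CA/client certificates into place and generated a server certificate whose SANs cover the entries listed in san=[...] above (127.0.0.1, 192.168.76.2, localhost, minikube and the profile name). If a TLS failure ever needs to be debugged, the SANs of the generated server.pem can be inspected with openssl (a generic check, not part of this run):

	openssl x509 -noout -text -in /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'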
	I1227 20:49:53.486054  480316 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:49:53.486274  480316 config.go:182] Loaded profile config "old-k8s-version-855707": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 20:49:53.486387  480316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:49:53.502410  480316 main.go:144] libmachine: Using SSH client type: native
	I1227 20:49:53.502714  480316 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1227 20:49:53.502734  480316 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:49:53.793708  480316 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:49:53.793733  480316 machine.go:97] duration metric: took 4.113248254s to provisionDockerMachine
	I1227 20:49:53.793743  480316 client.go:176] duration metric: took 10.486151721s to LocalClient.Create
	I1227 20:49:53.793792  480316 start.go:167] duration metric: took 10.486216383s to libmachine.API.Create "old-k8s-version-855707"
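The SSH command a few lines above drops CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' into /etc/sysconfig/crio.minikube and restarts CRI-O, so images served from the service CIDR (for example the in-cluster registry addon) can be pulled without TLS. Whether the drop-in landed can be checked from the host with generic minikube commands (not part of this run):

	minikube -p old-k8s-version-855707 ssh -- cat /etc/sysconfig/crio.minikube
	minikube -p old-k8s-version-855707 ssh -- systemctl is-active crio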
	I1227 20:49:53.793809  480316 start.go:293] postStartSetup for "old-k8s-version-855707" (driver="docker")
	I1227 20:49:53.793819  480316 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:49:53.793906  480316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:49:53.793971  480316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:49:53.810323  480316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa Username:docker}
	I1227 20:49:53.913325  480316 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:49:53.916413  480316 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:49:53.916483  480316 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:49:53.916503  480316 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:49:53.916569  480316 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:49:53.916664  480316 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:49:53.916770  480316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:49:53.924251  480316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:49:53.940874  480316 start.go:296] duration metric: took 147.049578ms for postStartSetup
	I1227 20:49:53.941267  480316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-855707
	I1227 20:49:53.957823  480316 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/config.json ...
	I1227 20:49:53.958102  480316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:49:53.958145  480316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:49:53.981612  480316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa Username:docker}
	I1227 20:49:54.086671  480316 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:49:54.091581  480316 start.go:128] duration metric: took 10.787695791s to createHost
	I1227 20:49:54.091609  480316 start.go:83] releasing machines lock for "old-k8s-version-855707", held for 10.787836628s
	I1227 20:49:54.091688  480316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-855707
	I1227 20:49:54.110615  480316 ssh_runner.go:195] Run: cat /version.json
	I1227 20:49:54.110671  480316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:49:54.110625  480316 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:49:54.110757  480316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:49:54.131022  480316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa Username:docker}
	I1227 20:49:54.133727  480316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa Username:docker}
	I1227 20:49:54.229256  480316 ssh_runner.go:195] Run: systemctl --version
	I1227 20:49:54.327774  480316 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:49:54.366009  480316 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:49:54.370438  480316 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:49:54.370513  480316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:49:54.398179  480316 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 20:49:54.398247  480316 start.go:496] detecting cgroup driver to use...
	I1227 20:49:54.398288  480316 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:49:54.398360  480316 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:49:54.416742  480316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:49:54.429006  480316 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:49:54.429078  480316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:49:54.447303  480316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:49:54.465991  480316 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:49:54.591556  480316 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:49:54.731319  480316 docker.go:234] disabling docker service ...
	I1227 20:49:54.731414  480316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:49:54.755437  480316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:49:54.778742  480316 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:49:54.901004  480316 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:49:55.033856  480316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
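The block above stops, disables and masks both cri-docker and docker so that CRI-O is the only CRI endpoint on the node; masking links the unit to /dev/null, so nothing can pull it back in as a dependency. A quick way to confirm that state (generic systemd check, not from this run):

	systemctl is-enabled docker.service cri-docker.service    # both should report "masked"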
	I1227 20:49:55.048493  480316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:49:55.062870  480316 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1227 20:49:55.063033  480316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:49:55.071855  480316 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:49:55.071982  480316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:49:55.081032  480316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:49:55.090220  480316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:49:55.099731  480316 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:49:55.108220  480316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:49:55.117577  480316 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:49:55.131449  480316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:49:55.140828  480316 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:49:55.148517  480316 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:49:55.155775  480316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:49:55.268531  480316 ssh_runner.go:195] Run: sudo systemctl restart crio
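The sed/rm sequence above edits /etc/crio/crio.conf.d/02-crio.conf before restarting CRI-O: it pins the pause image to registry.k8s.io/pause:3.9, sets the cgroup manager to cgroupfs, forces conmon_cgroup to "pod", and re-adds default_sysctls with net.ipv4.ip_unprivileged_port_start=0 so pods may bind low ports. After those edits the touched keys should read roughly as follows (only the modified keys are sketched; surrounding TOML sections are omitted):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]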
	I1227 20:49:55.470312  480316 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:49:55.470384  480316 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:49:55.474588  480316 start.go:574] Will wait 60s for crictl version
	I1227 20:49:55.474656  480316 ssh_runner.go:195] Run: which crictl
	I1227 20:49:55.479217  480316 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:49:55.507074  480316 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:49:55.507203  480316 ssh_runner.go:195] Run: crio --version
	I1227 20:49:55.539347  480316 ssh_runner.go:195] Run: crio --version
	I1227 20:49:55.571607  480316 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1227 20:49:55.574526  480316 cli_runner.go:164] Run: docker network inspect old-k8s-version-855707 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:49:55.591832  480316 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 20:49:55.595765  480316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:49:55.605996  480316 kubeadm.go:884] updating cluster {Name:old-k8s-version-855707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-855707 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:49:55.606111  480316 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 20:49:55.606179  480316 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:49:55.637984  480316 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:49:55.638008  480316 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:49:55.638068  480316 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:49:55.666307  480316 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:49:55.666331  480316 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:49:55.666339  480316 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1227 20:49:55.666421  480316 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-855707 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-855707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
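The [Unit]/[Service] fragment above is the kubelet drop-in that minikube writes a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes): the empty ExecStart= clears the packaged command line and the second ExecStart= replaces it, pointing kubelet at the v1.28.0 binary, /var/lib/kubelet/config.yaml and the node IP 192.168.76.2. The merged unit can be reviewed on the node with the standard systemd command (not part of this run):

	systemctl cat kubelet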
	I1227 20:49:55.666502  480316 ssh_runner.go:195] Run: crio config
	I1227 20:49:55.718017  480316 cni.go:84] Creating CNI manager for ""
	I1227 20:49:55.718040  480316 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:49:55.718057  480316 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:49:55.718101  480316 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-855707 NodeName:old-k8s-version-855707 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:49:55.718262  480316 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-855707"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:49:55.718341  480316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1227 20:49:55.726226  480316 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:49:55.726349  480316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:49:55.733887  480316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1227 20:49:55.746923  480316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:49:55.760729  480316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
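The rendered kubeadm config shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one document) was just copied to /var/tmp/minikube/kubeadm.yaml.new (2160 bytes); it is copied again to /var/tmp/minikube/kubeadm.yaml shortly before kubeadm init runs. When a start needs to be debugged, the exact config used can be read back from the node (generic command, not from this run):

	minikube -p old-k8s-version-855707 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml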
	I1227 20:49:55.773061  480316 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:49:55.776665  480316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:49:55.786241  480316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:49:55.909812  480316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:49:55.926031  480316 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707 for IP: 192.168.76.2
	I1227 20:49:55.926053  480316 certs.go:195] generating shared ca certs ...
	I1227 20:49:55.926105  480316 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:49:55.926253  480316 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:49:55.926310  480316 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:49:55.926324  480316 certs.go:257] generating profile certs ...
	I1227 20:49:55.926381  480316 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/client.key
	I1227 20:49:55.926405  480316 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/client.crt with IP's: []
	I1227 20:49:56.021959  480316 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/client.crt ...
	I1227 20:49:56.021994  480316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/client.crt: {Name:mk2433b39fdf007097e7942075ca8be37ede0fb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:49:56.022229  480316 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/client.key ...
	I1227 20:49:56.022248  480316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/client.key: {Name:mk547242545f93b6fa90bd9d8fff43bc5e363b36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:49:56.022357  480316 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/apiserver.key.cdba09ac
	I1227 20:49:56.022377  480316 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/apiserver.crt.cdba09ac with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 20:49:56.125895  480316 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/apiserver.crt.cdba09ac ...
	I1227 20:49:56.125927  480316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/apiserver.crt.cdba09ac: {Name:mke5827c7d0af9aa349e0887699d65f866f87467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:49:56.126100  480316 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/apiserver.key.cdba09ac ...
	I1227 20:49:56.126116  480316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/apiserver.key.cdba09ac: {Name:mkb23f6ef471427ae0a7a8ae73224aeec64cf4d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:49:56.126208  480316 certs.go:382] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/apiserver.crt.cdba09ac -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/apiserver.crt
	I1227 20:49:56.126285  480316 certs.go:386] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/apiserver.key.cdba09ac -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/apiserver.key
	I1227 20:49:56.126351  480316 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/proxy-client.key
	I1227 20:49:56.126368  480316 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/proxy-client.crt with IP's: []
	I1227 20:49:56.455359  480316 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/proxy-client.crt ...
	I1227 20:49:56.455392  480316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/proxy-client.crt: {Name:mk955a7bd0d48b2d9a9febd411fa2acb9b9bde36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:49:56.455580  480316 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/proxy-client.key ...
	I1227 20:49:56.455595  480316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/proxy-client.key: {Name:mk702972411b911560a939d20c384974d3cf68d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:49:56.455785  480316 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:49:56.455831  480316 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:49:56.455846  480316 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:49:56.455874  480316 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:49:56.455902  480316 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:49:56.455936  480316 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:49:56.455982  480316 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:49:56.456518  480316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:49:56.474682  480316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:49:56.492082  480316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:49:56.509861  480316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:49:56.528177  480316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1227 20:49:56.545543  480316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:49:56.562856  480316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:49:56.580260  480316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:49:56.596587  480316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:49:56.613563  480316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:49:56.631098  480316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:49:56.648594  480316 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:49:56.661817  480316 ssh_runner.go:195] Run: openssl version
	I1227 20:49:56.668067  480316 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:49:56.675252  480316 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:49:56.682679  480316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:49:56.686432  480316 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:49:56.686529  480316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:49:56.727682  480316 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:49:56.735193  480316 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/274336.pem /etc/ssl/certs/51391683.0
	I1227 20:49:56.742603  480316 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:49:56.750262  480316 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:49:56.758697  480316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:49:56.762759  480316 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:49:56.762871  480316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:49:56.804004  480316 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:49:56.811698  480316 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2743362.pem /etc/ssl/certs/3ec20f2e.0
	I1227 20:49:56.819274  480316 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:49:56.826865  480316 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:49:56.834449  480316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:49:56.838192  480316 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:49:56.838254  480316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:49:56.879005  480316 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:49:56.886420  480316 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 20:49:56.893548  480316 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:49:56.897042  480316 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 20:49:56.897093  480316 kubeadm.go:401] StartCluster: {Name:old-k8s-version-855707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-855707 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:49:56.897174  480316 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:49:56.897241  480316 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:49:56.924704  480316 cri.go:96] found id: ""
	I1227 20:49:56.924805  480316 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:49:56.932498  480316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 20:49:56.940092  480316 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:49:56.940193  480316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:49:56.947916  480316 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:49:56.947950  480316 kubeadm.go:158] found existing configuration files:
	
	I1227 20:49:56.948021  480316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 20:49:56.955261  480316 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:49:56.955346  480316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:49:56.964532  480316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 20:49:56.972929  480316 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:49:56.973008  480316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:49:56.980723  480316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 20:49:56.988960  480316 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:49:56.989026  480316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:49:56.996424  480316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 20:49:57.004779  480316 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:49:57.004842  480316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 20:49:57.013655  480316 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:49:57.062412  480316 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1227 20:49:57.062788  480316 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 20:49:57.106703  480316 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 20:49:57.106786  480316 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 20:49:57.106827  480316 kubeadm.go:319] OS: Linux
	I1227 20:49:57.106883  480316 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 20:49:57.106935  480316 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 20:49:57.106986  480316 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 20:49:57.107037  480316 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 20:49:57.107088  480316 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 20:49:57.107150  480316 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 20:49:57.107199  480316 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 20:49:57.107250  480316 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 20:49:57.107300  480316 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 20:49:57.196020  480316 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 20:49:57.196138  480316 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 20:49:57.196243  480316 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1227 20:49:57.337401  480316 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 20:49:57.343915  480316 out.go:252]   - Generating certificates and keys ...
	I1227 20:49:57.344039  480316 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 20:49:57.344137  480316 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 20:49:58.209613  480316 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 20:49:58.637465  480316 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 20:49:59.251741  480316 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 20:49:59.874012  480316 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 20:50:00.647625  480316 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 20:50:00.648018  480316 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-855707] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 20:50:00.855139  480316 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 20:50:00.855636  480316 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-855707] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 20:50:02.683683  480316 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 20:50:03.112157  480316 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 20:50:03.491106  480316 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 20:50:03.491443  480316 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 20:50:03.704234  480316 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 20:50:04.095613  480316 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 20:50:05.315118  480316 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 20:50:05.766969  480316 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 20:50:05.767903  480316 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 20:50:05.772442  480316 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 20:50:05.775749  480316 out.go:252]   - Booting up control plane ...
	I1227 20:50:05.775873  480316 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 20:50:05.776643  480316 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 20:50:05.778338  480316 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 20:50:05.795959  480316 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 20:50:05.796964  480316 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 20:50:05.797242  480316 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 20:50:05.939267  480316 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1227 20:50:13.941868  480316 kubeadm.go:319] [apiclient] All control plane components are healthy after 8.002711 seconds
	I1227 20:50:13.941991  480316 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 20:50:13.959500  480316 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 20:50:14.497722  480316 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 20:50:14.498187  480316 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-855707 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 20:50:15.026255  480316 kubeadm.go:319] [bootstrap-token] Using token: 2ig96t.xy392jrnbbar0byr
	I1227 20:50:15.029284  480316 out.go:252]   - Configuring RBAC rules ...
	I1227 20:50:15.029420  480316 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 20:50:15.035607  480316 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 20:50:15.046777  480316 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 20:50:15.052569  480316 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 20:50:15.058048  480316 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 20:50:15.062269  480316 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 20:50:15.079287  480316 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 20:50:15.408374  480316 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 20:50:15.473708  480316 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 20:50:15.475300  480316 kubeadm.go:319] 
	I1227 20:50:15.475375  480316 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 20:50:15.475386  480316 kubeadm.go:319] 
	I1227 20:50:15.475463  480316 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 20:50:15.475472  480316 kubeadm.go:319] 
	I1227 20:50:15.475498  480316 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 20:50:15.475563  480316 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 20:50:15.475626  480316 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 20:50:15.475635  480316 kubeadm.go:319] 
	I1227 20:50:15.475688  480316 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 20:50:15.475704  480316 kubeadm.go:319] 
	I1227 20:50:15.475752  480316 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 20:50:15.475760  480316 kubeadm.go:319] 
	I1227 20:50:15.475812  480316 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 20:50:15.475890  480316 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 20:50:15.475967  480316 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 20:50:15.475975  480316 kubeadm.go:319] 
	I1227 20:50:15.476060  480316 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 20:50:15.476141  480316 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 20:50:15.476148  480316 kubeadm.go:319] 
	I1227 20:50:15.476232  480316 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2ig96t.xy392jrnbbar0byr \
	I1227 20:50:15.476350  480316 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ff29328d1e0d612c7979c16c69d6042f5f31e931d111cc12c8320ed4e4ab5152 \
	I1227 20:50:15.476375  480316 kubeadm.go:319] 	--control-plane 
	I1227 20:50:15.476384  480316 kubeadm.go:319] 
	I1227 20:50:15.476469  480316 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 20:50:15.476477  480316 kubeadm.go:319] 
	I1227 20:50:15.476559  480316 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2ig96t.xy392jrnbbar0byr \
	I1227 20:50:15.476665  480316 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ff29328d1e0d612c7979c16c69d6042f5f31e931d111cc12c8320ed4e4ab5152 
	I1227 20:50:15.482762  480316 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 20:50:15.482896  480316 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
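kubeadm init finished with only the two warnings above: the "configs" kernel module is absent on this 5.15 aws kernel (so SystemVerification cannot read the kernel config), and the kubelet unit is not enabled at the systemd level, which minikube starts itself. The printed join command carries a bootstrap token with a 24h TTL; if it expires, an equivalent command can be regenerated on the control plane with the standard kubeadm subcommand (not part of this run):

	kubeadm token create --print-join-command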
	I1227 20:50:15.482930  480316 cni.go:84] Creating CNI manager for ""
	I1227 20:50:15.482939  480316 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:50:15.486134  480316 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1227 20:50:15.489205  480316 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 20:50:15.499625  480316 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1227 20:50:15.499649  480316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 20:50:15.520032  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
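Because the "docker" driver is combined with the "crio" runtime, minikube picked kindnet as the CNI (see the cni.go lines above) and applied its 2620-byte manifest with the bundled kubectl. Whether the CNI pods and the rest of kube-system came up can be checked afterwards with plain kubectl (not part of this run):

	kubectl -n kube-system get pods -o wide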
	I1227 20:50:16.439790  480316 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 20:50:16.439925  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:50:16.440009  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-855707 minikube.k8s.io/updated_at=2025_12_27T20_50_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562 minikube.k8s.io/name=old-k8s-version-855707 minikube.k8s.io/primary=true
	I1227 20:50:16.574753  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:50:16.574836  480316 ops.go:34] apiserver oom_adj: -16
	I1227 20:50:17.075588  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:50:17.575018  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:50:18.074960  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:50:18.575330  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:50:19.075830  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:50:19.574843  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:50:20.075781  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:50:20.575569  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:50:21.074835  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:50:21.575731  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:50:22.074899  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:50:22.575717  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:50:23.075282  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:50:23.575726  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:50:24.075199  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:50:24.575121  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:50:25.075636  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:50:25.575221  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:50:26.075119  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:50:26.575018  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:50:27.074820  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:50:27.574888  480316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:50:27.678728  480316 kubeadm.go:1114] duration metric: took 11.238844261s to wait for elevateKubeSystemPrivileges
	I1227 20:50:27.678756  480316 kubeadm.go:403] duration metric: took 30.78166623s to StartCluster
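The burst of repeated "kubectl get sa default" calls above is minikube polling roughly every 500ms until the default ServiceAccount appears, its signal that the controller manager is provisioning service accounts; the minikube-rbac clusterrolebinding granting cluster-admin to kube-system:default was created just before the loop started, and elevateKubeSystemPrivileges records the total wait (11.24s here). The same wait, written as a small illustrative shell loop rather than minikube's own Go code:

	# poll until the controller manager has created the default ServiceAccount
	until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
	  sleep 0.5
	done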
	I1227 20:50:27.678772  480316 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:50:27.678846  480316 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:50:27.679567  480316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:50:27.679777  480316 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:50:27.679940  480316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 20:50:27.680193  480316 config.go:182] Loaded profile config "old-k8s-version-855707": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 20:50:27.680228  480316 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:50:27.680290  480316 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-855707"
	I1227 20:50:27.680303  480316 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-855707"
	I1227 20:50:27.680323  480316 host.go:66] Checking if "old-k8s-version-855707" exists ...
	I1227 20:50:27.680814  480316 cli_runner.go:164] Run: docker container inspect old-k8s-version-855707 --format={{.State.Status}}
	I1227 20:50:27.681493  480316 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-855707"
	I1227 20:50:27.681525  480316 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-855707"
	I1227 20:50:27.681818  480316 cli_runner.go:164] Run: docker container inspect old-k8s-version-855707 --format={{.State.Status}}
	I1227 20:50:27.686811  480316 out.go:179] * Verifying Kubernetes components...
	I1227 20:50:27.695626  480316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:50:27.734907  480316 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-855707"
	I1227 20:50:27.734953  480316 host.go:66] Checking if "old-k8s-version-855707" exists ...
	I1227 20:50:27.735411  480316 cli_runner.go:164] Run: docker container inspect old-k8s-version-855707 --format={{.State.Status}}
	I1227 20:50:27.737781  480316 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:50:27.741574  480316 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:50:27.741596  480316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:50:27.741660  480316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:50:27.781598  480316 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:50:27.781620  480316 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:50:27.781698  480316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:50:27.782961  480316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa Username:docker}
	I1227 20:50:27.812676  480316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa Username:docker}
	I1227 20:50:28.066066  480316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:50:28.069036  480316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:50:28.219002  480316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:50:28.219273  480316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 20:50:29.243580  480316 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.024259563s)
	I1227 20:50:29.243659  480316 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1227 20:50:29.243728  480316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.174432503s)
	I1227 20:50:29.244230  480316 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.025063962s)
	I1227 20:50:29.244903  480316 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-855707" to be "Ready" ...
	I1227 20:50:29.247189  480316 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1227 20:50:29.250057  480316 addons.go:530] duration metric: took 1.569821836s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1227 20:50:29.749436  480316 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-855707" context rescaled to 1 replicas
	W1227 20:50:31.247793  480316 node_ready.go:57] node "old-k8s-version-855707" has "Ready":"False" status (will retry)
	W1227 20:50:33.248497  480316 node_ready.go:57] node "old-k8s-version-855707" has "Ready":"False" status (will retry)
	W1227 20:50:35.248936  480316 node_ready.go:57] node "old-k8s-version-855707" has "Ready":"False" status (will retry)
	W1227 20:50:37.748403  480316 node_ready.go:57] node "old-k8s-version-855707" has "Ready":"False" status (will retry)
	W1227 20:50:40.247796  480316 node_ready.go:57] node "old-k8s-version-855707" has "Ready":"False" status (will retry)
	I1227 20:50:42.250058  480316 node_ready.go:49] node "old-k8s-version-855707" is "Ready"
	I1227 20:50:42.250093  480316 node_ready.go:38] duration metric: took 13.005170781s for node "old-k8s-version-855707" to be "Ready" ...
	I1227 20:50:42.250109  480316 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:50:42.250184  480316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:50:42.270103  480316 api_server.go:72] duration metric: took 14.590296822s to wait for apiserver process to appear ...
	I1227 20:50:42.270193  480316 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:50:42.270228  480316 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:50:42.281634  480316 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 20:50:42.284085  480316 api_server.go:141] control plane version: v1.28.0
	I1227 20:50:42.284116  480316 api_server.go:131] duration metric: took 13.902129ms to wait for apiserver health ...
	I1227 20:50:42.284127  480316 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:50:42.290152  480316 system_pods.go:59] 8 kube-system pods found
	I1227 20:50:42.290263  480316 system_pods.go:61] "coredns-5dd5756b68-gpcrh" [a817d3e5-41a0-4029-8f3a-e902cf24169c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:50:42.290321  480316 system_pods.go:61] "etcd-old-k8s-version-855707" [347619f1-fd36-46b8-8280-533c8d8107e6] Running
	I1227 20:50:42.290349  480316 system_pods.go:61] "kindnet-v9n7l" [20b6398f-382b-4304-beda-4f34e8e3a495] Running
	I1227 20:50:42.290371  480316 system_pods.go:61] "kube-apiserver-old-k8s-version-855707" [8bfd9b90-fbb5-4473-b9a4-572d9ccaa1c2] Running
	I1227 20:50:42.290414  480316 system_pods.go:61] "kube-controller-manager-old-k8s-version-855707" [2eff16b1-95f6-4af5-9aea-16016fbf3b59] Running
	I1227 20:50:42.290440  480316 system_pods.go:61] "kube-proxy-57s5h" [fac7868c-241d-4875-9eb7-976a578866b0] Running
	I1227 20:50:42.290465  480316 system_pods.go:61] "kube-scheduler-old-k8s-version-855707" [cd0966b8-145e-4e34-b566-9b91f269eaa5] Running
	I1227 20:50:42.290509  480316 system_pods.go:61] "storage-provisioner" [c3f69f3c-dfab-44a6-a2b9-a993044ed4ec] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:50:42.290538  480316 system_pods.go:74] duration metric: took 6.40366ms to wait for pod list to return data ...
	I1227 20:50:42.290588  480316 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:50:42.294010  480316 default_sa.go:45] found service account: "default"
	I1227 20:50:42.294043  480316 default_sa.go:55] duration metric: took 3.429248ms for default service account to be created ...
	I1227 20:50:42.294055  480316 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:50:42.298171  480316 system_pods.go:86] 8 kube-system pods found
	I1227 20:50:42.298267  480316 system_pods.go:89] "coredns-5dd5756b68-gpcrh" [a817d3e5-41a0-4029-8f3a-e902cf24169c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:50:42.298298  480316 system_pods.go:89] "etcd-old-k8s-version-855707" [347619f1-fd36-46b8-8280-533c8d8107e6] Running
	I1227 20:50:42.298341  480316 system_pods.go:89] "kindnet-v9n7l" [20b6398f-382b-4304-beda-4f34e8e3a495] Running
	I1227 20:50:42.298369  480316 system_pods.go:89] "kube-apiserver-old-k8s-version-855707" [8bfd9b90-fbb5-4473-b9a4-572d9ccaa1c2] Running
	I1227 20:50:42.298394  480316 system_pods.go:89] "kube-controller-manager-old-k8s-version-855707" [2eff16b1-95f6-4af5-9aea-16016fbf3b59] Running
	I1227 20:50:42.298428  480316 system_pods.go:89] "kube-proxy-57s5h" [fac7868c-241d-4875-9eb7-976a578866b0] Running
	I1227 20:50:42.298455  480316 system_pods.go:89] "kube-scheduler-old-k8s-version-855707" [cd0966b8-145e-4e34-b566-9b91f269eaa5] Running
	I1227 20:50:42.298484  480316 system_pods.go:89] "storage-provisioner" [c3f69f3c-dfab-44a6-a2b9-a993044ed4ec] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:50:42.298539  480316 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1227 20:50:42.575950  480316 system_pods.go:86] 8 kube-system pods found
	I1227 20:50:42.575984  480316 system_pods.go:89] "coredns-5dd5756b68-gpcrh" [a817d3e5-41a0-4029-8f3a-e902cf24169c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:50:42.575992  480316 system_pods.go:89] "etcd-old-k8s-version-855707" [347619f1-fd36-46b8-8280-533c8d8107e6] Running
	I1227 20:50:42.575999  480316 system_pods.go:89] "kindnet-v9n7l" [20b6398f-382b-4304-beda-4f34e8e3a495] Running
	I1227 20:50:42.576005  480316 system_pods.go:89] "kube-apiserver-old-k8s-version-855707" [8bfd9b90-fbb5-4473-b9a4-572d9ccaa1c2] Running
	I1227 20:50:42.576010  480316 system_pods.go:89] "kube-controller-manager-old-k8s-version-855707" [2eff16b1-95f6-4af5-9aea-16016fbf3b59] Running
	I1227 20:50:42.576014  480316 system_pods.go:89] "kube-proxy-57s5h" [fac7868c-241d-4875-9eb7-976a578866b0] Running
	I1227 20:50:42.576019  480316 system_pods.go:89] "kube-scheduler-old-k8s-version-855707" [cd0966b8-145e-4e34-b566-9b91f269eaa5] Running
	I1227 20:50:42.576025  480316 system_pods.go:89] "storage-provisioner" [c3f69f3c-dfab-44a6-a2b9-a993044ed4ec] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:50:42.969211  480316 system_pods.go:86] 8 kube-system pods found
	I1227 20:50:42.969294  480316 system_pods.go:89] "coredns-5dd5756b68-gpcrh" [a817d3e5-41a0-4029-8f3a-e902cf24169c] Running
	I1227 20:50:42.969317  480316 system_pods.go:89] "etcd-old-k8s-version-855707" [347619f1-fd36-46b8-8280-533c8d8107e6] Running
	I1227 20:50:42.969337  480316 system_pods.go:89] "kindnet-v9n7l" [20b6398f-382b-4304-beda-4f34e8e3a495] Running
	I1227 20:50:42.969373  480316 system_pods.go:89] "kube-apiserver-old-k8s-version-855707" [8bfd9b90-fbb5-4473-b9a4-572d9ccaa1c2] Running
	I1227 20:50:42.969403  480316 system_pods.go:89] "kube-controller-manager-old-k8s-version-855707" [2eff16b1-95f6-4af5-9aea-16016fbf3b59] Running
	I1227 20:50:42.969425  480316 system_pods.go:89] "kube-proxy-57s5h" [fac7868c-241d-4875-9eb7-976a578866b0] Running
	I1227 20:50:42.969503  480316 system_pods.go:89] "kube-scheduler-old-k8s-version-855707" [cd0966b8-145e-4e34-b566-9b91f269eaa5] Running
	I1227 20:50:42.969529  480316 system_pods.go:89] "storage-provisioner" [c3f69f3c-dfab-44a6-a2b9-a993044ed4ec] Running
	I1227 20:50:42.969555  480316 system_pods.go:126] duration metric: took 675.492056ms to wait for k8s-apps to be running ...
	I1227 20:50:42.969606  480316 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:50:42.969703  480316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:50:42.983572  480316 system_svc.go:56] duration metric: took 13.958702ms WaitForService to wait for kubelet
	I1227 20:50:42.983652  480316 kubeadm.go:587] duration metric: took 15.303849983s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:50:42.983686  480316 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:50:42.990071  480316 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:50:42.990156  480316 node_conditions.go:123] node cpu capacity is 2
	I1227 20:50:42.990184  480316 node_conditions.go:105] duration metric: took 6.478102ms to run NodePressure ...
	I1227 20:50:42.990224  480316 start.go:242] waiting for startup goroutines ...
	I1227 20:50:42.990248  480316 start.go:247] waiting for cluster config update ...
	I1227 20:50:42.990275  480316 start.go:256] writing updated cluster config ...
	I1227 20:50:42.990616  480316 ssh_runner.go:195] Run: rm -f paused
	I1227 20:50:42.998735  480316 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:50:43.003161  480316 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-gpcrh" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:50:43.009582  480316 pod_ready.go:94] pod "coredns-5dd5756b68-gpcrh" is "Ready"
	I1227 20:50:43.009616  480316 pod_ready.go:86] duration metric: took 6.432024ms for pod "coredns-5dd5756b68-gpcrh" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:50:43.014048  480316 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-855707" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:50:43.024428  480316 pod_ready.go:94] pod "etcd-old-k8s-version-855707" is "Ready"
	I1227 20:50:43.024461  480316 pod_ready.go:86] duration metric: took 10.383702ms for pod "etcd-old-k8s-version-855707" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:50:43.031057  480316 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-855707" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:50:43.041558  480316 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-855707" is "Ready"
	I1227 20:50:43.041588  480316 pod_ready.go:86] duration metric: took 10.50346ms for pod "kube-apiserver-old-k8s-version-855707" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:50:43.044756  480316 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-855707" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:50:43.403065  480316 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-855707" is "Ready"
	I1227 20:50:43.403096  480316 pod_ready.go:86] duration metric: took 358.315856ms for pod "kube-controller-manager-old-k8s-version-855707" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:50:43.603771  480316 pod_ready.go:83] waiting for pod "kube-proxy-57s5h" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:50:44.003258  480316 pod_ready.go:94] pod "kube-proxy-57s5h" is "Ready"
	I1227 20:50:44.003290  480316 pod_ready.go:86] duration metric: took 399.490216ms for pod "kube-proxy-57s5h" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:50:44.203996  480316 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-855707" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:50:44.602653  480316 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-855707" is "Ready"
	I1227 20:50:44.602680  480316 pod_ready.go:86] duration metric: took 398.65428ms for pod "kube-scheduler-old-k8s-version-855707" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:50:44.602694  480316 pod_ready.go:40] duration metric: took 1.603871117s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:50:44.655106  480316 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1227 20:50:44.658215  480316 out.go:203] 
	W1227 20:50:44.661077  480316 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1227 20:50:44.664064  480316 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1227 20:50:44.667889  480316 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-855707" cluster and "default" namespace by default
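	Two of the steps recorded above are easy to re-check by hand: the hosts block for host.minikube.internal injected into the CoreDNS ConfigMap (20:50:29) and the apiserver healthz probe against https://192.168.76.2:8443/healthz (20:50:42). A minimal sketch, not part of the test run, assuming the kubeconfig context is named after the profile and the default anonymous access to /healthz is in place:
	  curl -k https://192.168.76.2:8443/healthz        # should print "ok", as logged at 20:50:42
	  kubectl --context old-k8s-version-855707 -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'   # shows the injected 192.168.76.1 host.minikube.internal entry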
	
	
	==> CRI-O <==
	Dec 27 20:50:42 old-k8s-version-855707 crio[835]: time="2025-12-27T20:50:42.412750347Z" level=info msg="Created container 8f6b24676cb773395ee519ea50ab7eceb39d840901e9822a4a7252c0a140155d: kube-system/coredns-5dd5756b68-gpcrh/coredns" id=a85ef442-e5d8-4ce4-b03d-87b78e1d5284 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:50:42 old-k8s-version-855707 crio[835]: time="2025-12-27T20:50:42.413366132Z" level=info msg="Starting container: 8f6b24676cb773395ee519ea50ab7eceb39d840901e9822a4a7252c0a140155d" id=53dcb4cb-d9d1-4d92-a311-73d63f824e18 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:50:42 old-k8s-version-855707 crio[835]: time="2025-12-27T20:50:42.417187344Z" level=info msg="Started container" PID=1938 containerID=8f6b24676cb773395ee519ea50ab7eceb39d840901e9822a4a7252c0a140155d description=kube-system/coredns-5dd5756b68-gpcrh/coredns id=53dcb4cb-d9d1-4d92-a311-73d63f824e18 name=/runtime.v1.RuntimeService/StartContainer sandboxID=437aa9e66ec0ecc27720557a07ab2ece7bef9f3f5f841f263de73efff76cc0db
	Dec 27 20:50:45 old-k8s-version-855707 crio[835]: time="2025-12-27T20:50:45.180695768Z" level=info msg="Running pod sandbox: default/busybox/POD" id=6797084b-b138-455f-8914-bcb957f317f7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:50:45 old-k8s-version-855707 crio[835]: time="2025-12-27T20:50:45.18078443Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:50:45 old-k8s-version-855707 crio[835]: time="2025-12-27T20:50:45.204339994Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a4c71c3933a33293d3cc908186cd23cc5287ec063983cd73abfab4ce96bf7c05 UID:9c2c639a-7368-4a9e-ad13-67a2e87b202b NetNS:/var/run/netns/c05fcce3-b242-4ee0-b66c-226354c7324e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40026e27c0}] Aliases:map[]}"
	Dec 27 20:50:45 old-k8s-version-855707 crio[835]: time="2025-12-27T20:50:45.204378056Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 27 20:50:45 old-k8s-version-855707 crio[835]: time="2025-12-27T20:50:45.260627861Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a4c71c3933a33293d3cc908186cd23cc5287ec063983cd73abfab4ce96bf7c05 UID:9c2c639a-7368-4a9e-ad13-67a2e87b202b NetNS:/var/run/netns/c05fcce3-b242-4ee0-b66c-226354c7324e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40026e27c0}] Aliases:map[]}"
	Dec 27 20:50:45 old-k8s-version-855707 crio[835]: time="2025-12-27T20:50:45.260974Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 27 20:50:45 old-k8s-version-855707 crio[835]: time="2025-12-27T20:50:45.274532288Z" level=info msg="Ran pod sandbox a4c71c3933a33293d3cc908186cd23cc5287ec063983cd73abfab4ce96bf7c05 with infra container: default/busybox/POD" id=6797084b-b138-455f-8914-bcb957f317f7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:50:45 old-k8s-version-855707 crio[835]: time="2025-12-27T20:50:45.276153163Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c947d0ad-4380-473b-9d28-099bc954cfc7 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:50:45 old-k8s-version-855707 crio[835]: time="2025-12-27T20:50:45.276314036Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c947d0ad-4380-473b-9d28-099bc954cfc7 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:50:45 old-k8s-version-855707 crio[835]: time="2025-12-27T20:50:45.276357719Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=c947d0ad-4380-473b-9d28-099bc954cfc7 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:50:45 old-k8s-version-855707 crio[835]: time="2025-12-27T20:50:45.281883185Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0cbd73ef-743d-4495-8baf-fe17f613e874 name=/runtime.v1.ImageService/PullImage
	Dec 27 20:50:45 old-k8s-version-855707 crio[835]: time="2025-12-27T20:50:45.284572213Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 27 20:50:47 old-k8s-version-855707 crio[835]: time="2025-12-27T20:50:47.293762703Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=0cbd73ef-743d-4495-8baf-fe17f613e874 name=/runtime.v1.ImageService/PullImage
	Dec 27 20:50:47 old-k8s-version-855707 crio[835]: time="2025-12-27T20:50:47.296252804Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bb682ae3-b563-4e5f-87d7-ce646a413e75 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:50:47 old-k8s-version-855707 crio[835]: time="2025-12-27T20:50:47.298286968Z" level=info msg="Creating container: default/busybox/busybox" id=9b94d887-f0eb-4593-87a9-ec9be8fb1340 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:50:47 old-k8s-version-855707 crio[835]: time="2025-12-27T20:50:47.298620726Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:50:47 old-k8s-version-855707 crio[835]: time="2025-12-27T20:50:47.305987877Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:50:47 old-k8s-version-855707 crio[835]: time="2025-12-27T20:50:47.307056506Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:50:47 old-k8s-version-855707 crio[835]: time="2025-12-27T20:50:47.32411659Z" level=info msg="Created container f2d309291b054cb6579d7366c6b81009bd4fc93ed71127dc42d15624e2afa6f2: default/busybox/busybox" id=9b94d887-f0eb-4593-87a9-ec9be8fb1340 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:50:47 old-k8s-version-855707 crio[835]: time="2025-12-27T20:50:47.325096205Z" level=info msg="Starting container: f2d309291b054cb6579d7366c6b81009bd4fc93ed71127dc42d15624e2afa6f2" id=a1314e1e-be6e-4c40-bee8-981edcde78dd name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:50:47 old-k8s-version-855707 crio[835]: time="2025-12-27T20:50:47.326811592Z" level=info msg="Started container" PID=1997 containerID=f2d309291b054cb6579d7366c6b81009bd4fc93ed71127dc42d15624e2afa6f2 description=default/busybox/busybox id=a1314e1e-be6e-4c40-bee8-981edcde78dd name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4c71c3933a33293d3cc908186cd23cc5287ec063983cd73abfab4ce96bf7c05
	Dec 27 20:50:53 old-k8s-version-855707 crio[835]: time="2025-12-27T20:50:53.06991171Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
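	The CRI-O entries above trace the busybox pod through ImageStatus (miss), PullImage, CreateContainer and StartContainer. A hedged sketch of the equivalent manual checks from inside the node (reached e.g. via 'minikube -p old-k8s-version-855707 ssh'); the image name is taken from the log:
	  sudo crictl inspecti gcr.io/k8s-minikube/busybox:1.28.4-glibc   # image status, the query that initially reported the image as not found
	  sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc       # same pull CRI-O performed between 20:50:45 and 20:50:47
	  sudo crictl ps --name busybox                                   # the container started at 20:50:47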
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	f2d309291b054       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   a4c71c3933a33       busybox                                          default
	8f6b24676cb77       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      12 seconds ago      Running             coredns                   0                   437aa9e66ec0e       coredns-5dd5756b68-gpcrh                         kube-system
	d5c4a6dbd994a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago      Running             storage-provisioner       0                   cb8ebca1c4ab6       storage-provisioner                              kube-system
	0500156f1eeef       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    23 seconds ago      Running             kindnet-cni               0                   0b1e72abad476       kindnet-v9n7l                                    kube-system
	554db5f6a11d1       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      25 seconds ago      Running             kube-proxy                0                   5ebaa6d413d52       kube-proxy-57s5h                                 kube-system
	5e2345dd5d134       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      46 seconds ago      Running             kube-controller-manager   0                   fbed857dcaace       kube-controller-manager-old-k8s-version-855707   kube-system
	81c8aeddbbf62       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      46 seconds ago      Running             kube-scheduler            0                   da0e735ab1066       kube-scheduler-old-k8s-version-855707            kube-system
	70e6ed121918f       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      46 seconds ago      Running             etcd                      0                   87fd802f7dffb       etcd-old-k8s-version-855707                      kube-system
	457c8aa91c4f4       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      46 seconds ago      Running             kube-apiserver            0                   4c5ee908c85e3       kube-apiserver-old-k8s-version-855707            kube-system
	
	
	==> coredns [8f6b24676cb773395ee519ea50ab7eceb39d840901e9822a4a7252c0a140155d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60584 - 4223 "HINFO IN 1997994040123639516.1203973187989114088. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011301057s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-855707
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-855707
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=old-k8s-version-855707
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_50_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:50:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-855707
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:50:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:50:46 +0000   Sat, 27 Dec 2025 20:50:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:50:46 +0000   Sat, 27 Dec 2025 20:50:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:50:46 +0000   Sat, 27 Dec 2025 20:50:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:50:46 +0000   Sat, 27 Dec 2025 20:50:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-855707
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                6f46b687-0cf7-4b64-a058-88b55b2f77d5
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-gpcrh                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     26s
	  kube-system                 etcd-old-k8s-version-855707                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         39s
	  kube-system                 kindnet-v9n7l                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-855707             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-855707    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-57s5h                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-old-k8s-version-855707             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node old-k8s-version-855707 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node old-k8s-version-855707 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x8 over 47s)  kubelet          Node old-k8s-version-855707 status is now: NodeHasSufficientPID
	  Normal  Starting                 39s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s                kubelet          Node old-k8s-version-855707 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s                kubelet          Node old-k8s-version-855707 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s                kubelet          Node old-k8s-version-855707 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node old-k8s-version-855707 event: Registered Node old-k8s-version-855707 in Controller
	  Normal  NodeReady                13s                kubelet          Node old-k8s-version-855707 status is now: NodeReady
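	The Ready condition shown in the node description can be spot-checked without reading the full describe output; a small sketch, assuming the context name matches the profile:
	  kubectl --context old-k8s-version-855707 get node old-k8s-version-855707 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints True, matching the NodeReady transition at 20:50:41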
	
	
	==> dmesg <==
	[ +35.129102] overlayfs: idmapped layers are currently not supported
	[Dec27 20:17] overlayfs: idmapped layers are currently not supported
	[Dec27 20:19] overlayfs: idmapped layers are currently not supported
	[ +36.244108] systemd-journald[225]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 20:22] overlayfs: idmapped layers are currently not supported
	[Dec27 20:23] overlayfs: idmapped layers are currently not supported
	[Dec27 20:24] overlayfs: idmapped layers are currently not supported
	[Dec27 20:25] overlayfs: idmapped layers are currently not supported
	[ +35.447549] overlayfs: idmapped layers are currently not supported
	[Dec27 20:26] overlayfs: idmapped layers are currently not supported
	[Dec27 20:27] overlayfs: idmapped layers are currently not supported
	[  +6.770645] overlayfs: idmapped layers are currently not supported
	[Dec27 20:28] overlayfs: idmapped layers are currently not supported
	[ +25.872751] overlayfs: idmapped layers are currently not supported
	[Dec27 20:29] overlayfs: idmapped layers are currently not supported
	[ +32.997137] overlayfs: idmapped layers are currently not supported
	[Dec27 20:31] overlayfs: idmapped layers are currently not supported
	[Dec27 20:33] overlayfs: idmapped layers are currently not supported
	[ +33.772475] overlayfs: idmapped layers are currently not supported
	[Dec27 20:39] overlayfs: idmapped layers are currently not supported
	[Dec27 20:40] overlayfs: idmapped layers are currently not supported
	[Dec27 20:44] overlayfs: idmapped layers are currently not supported
	[Dec27 20:45] overlayfs: idmapped layers are currently not supported
	[Dec27 20:49] overlayfs: idmapped layers are currently not supported
	[Dec27 20:50] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [70e6ed121918fde34cf6514b602e72bf69a55f6577096390ed62e49d4279add2] <==
	{"level":"info","ts":"2025-12-27T20:50:07.661037Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T20:50:07.673969Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"ea7e25599daad906","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-12-27T20:50:07.674209Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T20:50:07.674292Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T20:50:07.674335Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T20:50:07.674654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-27T20:50:07.674789Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-27T20:50:08.627808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T20:50:08.627931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T20:50:08.627982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-12-27T20:50:08.628051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:50:08.628084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T20:50:08.628125Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-27T20:50:08.628172Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T20:50:08.633642Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-855707 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:50:08.633852Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:50:08.634908Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:50:08.634983Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T20:50:08.635092Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T20:50:08.636034Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:50:08.637832Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:50:08.637861Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:50:08.637983Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T20:50:08.638066Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T20:50:08.638122Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 20:50:54 up  2:33,  0 user,  load average: 1.88, 1.77, 1.88
	Linux old-k8s-version-855707 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0500156f1eeef1186208a78ee8066df467f4c68890a5147579a3da3a3722897a] <==
	I1227 20:50:31.235096       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:50:31.235544       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 20:50:31.235731       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:50:31.235753       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:50:31.235767       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:50:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:50:31.436575       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:50:31.436637       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:50:31.436671       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:50:31.437421       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 20:50:31.636754       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:50:31.636780       1 metrics.go:72] Registering metrics
	I1227 20:50:31.636841       1 controller.go:711] "Syncing nftables rules"
	I1227 20:50:41.436335       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:50:41.436389       1 main.go:301] handling current node
	I1227 20:50:51.438535       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:50:51.438574       1 main.go:301] handling current node
	
	
	==> kube-apiserver [457c8aa91c4f49430d8f9c8ebc372dfa53f6717a1782a82be264468191f4031b] <==
	I1227 20:50:12.088608       1 controller.go:624] quota admission added evaluator for: namespaces
	I1227 20:50:12.093201       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1227 20:50:12.097391       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1227 20:50:12.093337       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1227 20:50:12.096075       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:50:12.097940       1 aggregator.go:166] initial CRD sync complete...
	I1227 20:50:12.097986       1 autoregister_controller.go:141] Starting autoregister controller
	I1227 20:50:12.098015       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:50:12.098045       1 cache.go:39] Caches are synced for autoregister controller
	I1227 20:50:12.800317       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1227 20:50:12.804544       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1227 20:50:12.804572       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1227 20:50:13.371653       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:50:13.429884       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:50:13.517834       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 20:50:13.529401       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1227 20:50:13.530675       1 controller.go:624] quota admission added evaluator for: endpoints
	I1227 20:50:13.535601       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:50:14.002376       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1227 20:50:15.389657       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1227 20:50:15.406736       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 20:50:15.423855       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1227 20:50:27.808981       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1227 20:50:27.891504       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E1227 20:50:53.127827       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.76.2:36910->192.168.76.2:10250: write: broken pipe
	
	
	==> kube-controller-manager [5e2345dd5d134ce43290cf5e5bd1d3e35a9b8c970cc8f535f3e6f00e5f71e3f5] <==
	I1227 20:50:27.356296       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1227 20:50:27.356310       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1227 20:50:27.412467       1 shared_informer.go:318] Caches are synced for resource quota
	I1227 20:50:27.791276       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 20:50:27.791305       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1227 20:50:27.805872       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 20:50:27.858378       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1227 20:50:28.058158       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-v9n7l"
	I1227 20:50:28.058361       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-57s5h"
	I1227 20:50:28.335239       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-gpcrh"
	I1227 20:50:28.364364       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-qjb4g"
	I1227 20:50:28.399763       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="550.117463ms"
	I1227 20:50:28.419180       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.373629ms"
	I1227 20:50:28.420170       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="68.839µs"
	I1227 20:50:28.443341       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="117.174µs"
	I1227 20:50:29.277540       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1227 20:50:29.322191       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-qjb4g"
	I1227 20:50:29.338862       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="44.332432ms"
	I1227 20:50:29.359875       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.850386ms"
	I1227 20:50:29.360363       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="255.443µs"
	I1227 20:50:42.033666       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="117.329µs"
	I1227 20:50:42.060434       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="94.479µs"
	I1227 20:50:42.227017       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1227 20:50:42.771007       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.43824ms"
	I1227 20:50:42.771294       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.002µs"
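	The controller-manager lines above record coredns being scaled down from 2 replicas to 1 at 20:50:29, which matches the "rescaled to 1 replicas" entry in the start log; minikube applies this default on single-node clusters. A hedged equivalent of that scale-down, were it done by hand:
	  kubectl --context old-k8s-version-855707 -n kube-system scale deployment coredns --replicas=1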
	
	
	==> kube-proxy [554db5f6a11d142cbe1c2e45d13895f1b7084ca65403d90c1e29da5b80b7c83e] <==
	I1227 20:50:28.640592       1 server_others.go:69] "Using iptables proxy"
	I1227 20:50:28.672597       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1227 20:50:28.758784       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:50:28.765375       1 server_others.go:152] "Using iptables Proxier"
	I1227 20:50:28.765412       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1227 20:50:28.765419       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1227 20:50:28.765567       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1227 20:50:28.765781       1 server.go:846] "Version info" version="v1.28.0"
	I1227 20:50:28.765791       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:50:28.766726       1 config.go:188] "Starting service config controller"
	I1227 20:50:28.766749       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1227 20:50:28.766766       1 config.go:97] "Starting endpoint slice config controller"
	I1227 20:50:28.766769       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1227 20:50:28.767259       1 config.go:315] "Starting node config controller"
	I1227 20:50:28.767266       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1227 20:50:28.868683       1 shared_informer.go:318] Caches are synced for node config
	I1227 20:50:28.868715       1 shared_informer.go:318] Caches are synced for service config
	I1227 20:50:28.868755       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [81c8aeddbbf62a8d34de355883b71fb20c4dabefe75dd7fb3ba5db8f1e43e01a] <==
	W1227 20:50:12.057696       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1227 20:50:12.057820       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1227 20:50:12.057805       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1227 20:50:12.057878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1227 20:50:12.057918       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1227 20:50:12.057889       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1227 20:50:12.058002       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1227 20:50:12.058019       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1227 20:50:12.058138       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1227 20:50:12.058200       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1227 20:50:12.875584       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1227 20:50:12.875625       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1227 20:50:12.943930       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1227 20:50:12.943971       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 20:50:12.971545       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1227 20:50:12.971661       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1227 20:50:12.992307       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1227 20:50:12.992416       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1227 20:50:13.059704       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1227 20:50:13.059808       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1227 20:50:13.097385       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1227 20:50:13.097528       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1227 20:50:13.143808       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1227 20:50:13.143913       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1227 20:50:15.145003       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 27 20:50:28 old-k8s-version-855707 kubelet[1388]: I1227 20:50:28.162605    1388 topology_manager.go:215] "Topology Admit Handler" podUID="fac7868c-241d-4875-9eb7-976a578866b0" podNamespace="kube-system" podName="kube-proxy-57s5h"
	Dec 27 20:50:28 old-k8s-version-855707 kubelet[1388]: I1227 20:50:28.193949    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fac7868c-241d-4875-9eb7-976a578866b0-kube-proxy\") pod \"kube-proxy-57s5h\" (UID: \"fac7868c-241d-4875-9eb7-976a578866b0\") " pod="kube-system/kube-proxy-57s5h"
	Dec 27 20:50:28 old-k8s-version-855707 kubelet[1388]: I1227 20:50:28.194005    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fac7868c-241d-4875-9eb7-976a578866b0-lib-modules\") pod \"kube-proxy-57s5h\" (UID: \"fac7868c-241d-4875-9eb7-976a578866b0\") " pod="kube-system/kube-proxy-57s5h"
	Dec 27 20:50:28 old-k8s-version-855707 kubelet[1388]: I1227 20:50:28.194036    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/20b6398f-382b-4304-beda-4f34e8e3a495-cni-cfg\") pod \"kindnet-v9n7l\" (UID: \"20b6398f-382b-4304-beda-4f34e8e3a495\") " pod="kube-system/kindnet-v9n7l"
	Dec 27 20:50:28 old-k8s-version-855707 kubelet[1388]: I1227 20:50:28.194058    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20b6398f-382b-4304-beda-4f34e8e3a495-lib-modules\") pod \"kindnet-v9n7l\" (UID: \"20b6398f-382b-4304-beda-4f34e8e3a495\") " pod="kube-system/kindnet-v9n7l"
	Dec 27 20:50:28 old-k8s-version-855707 kubelet[1388]: I1227 20:50:28.194087    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dsmz\" (UniqueName: \"kubernetes.io/projected/20b6398f-382b-4304-beda-4f34e8e3a495-kube-api-access-5dsmz\") pod \"kindnet-v9n7l\" (UID: \"20b6398f-382b-4304-beda-4f34e8e3a495\") " pod="kube-system/kindnet-v9n7l"
	Dec 27 20:50:28 old-k8s-version-855707 kubelet[1388]: I1227 20:50:28.194110    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fac7868c-241d-4875-9eb7-976a578866b0-xtables-lock\") pod \"kube-proxy-57s5h\" (UID: \"fac7868c-241d-4875-9eb7-976a578866b0\") " pod="kube-system/kube-proxy-57s5h"
	Dec 27 20:50:28 old-k8s-version-855707 kubelet[1388]: I1227 20:50:28.194135    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xkm7\" (UniqueName: \"kubernetes.io/projected/fac7868c-241d-4875-9eb7-976a578866b0-kube-api-access-7xkm7\") pod \"kube-proxy-57s5h\" (UID: \"fac7868c-241d-4875-9eb7-976a578866b0\") " pod="kube-system/kube-proxy-57s5h"
	Dec 27 20:50:28 old-k8s-version-855707 kubelet[1388]: I1227 20:50:28.194161    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20b6398f-382b-4304-beda-4f34e8e3a495-xtables-lock\") pod \"kindnet-v9n7l\" (UID: \"20b6398f-382b-4304-beda-4f34e8e3a495\") " pod="kube-system/kindnet-v9n7l"
	Dec 27 20:50:28 old-k8s-version-855707 kubelet[1388]: W1227 20:50:28.456575    1388 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2/crio-0b1e72abad4761ce4fa15936758fb6ea705c300c0149c4f8710cd1393b614a6f WatchSource:0}: Error finding container 0b1e72abad4761ce4fa15936758fb6ea705c300c0149c4f8710cd1393b614a6f: Status 404 returned error can't find the container with id 0b1e72abad4761ce4fa15936758fb6ea705c300c0149c4f8710cd1393b614a6f
	Dec 27 20:50:31 old-k8s-version-855707 kubelet[1388]: I1227 20:50:31.721325    1388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-57s5h" podStartSLOduration=3.7212796900000003 podCreationTimestamp="2025-12-27 20:50:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:50:28.722139985 +0000 UTC m=+13.369876556" watchObservedRunningTime="2025-12-27 20:50:31.72127969 +0000 UTC m=+16.369016261"
	Dec 27 20:50:35 old-k8s-version-855707 kubelet[1388]: I1227 20:50:35.594355    1388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-v9n7l" podStartSLOduration=5.964476224 podCreationTimestamp="2025-12-27 20:50:27 +0000 UTC" firstStartedPulling="2025-12-27 20:50:28.461283446 +0000 UTC m=+13.109020009" lastFinishedPulling="2025-12-27 20:50:31.091120897 +0000 UTC m=+15.738857468" observedRunningTime="2025-12-27 20:50:31.722847922 +0000 UTC m=+16.370584493" watchObservedRunningTime="2025-12-27 20:50:35.594313683 +0000 UTC m=+20.242050246"
	Dec 27 20:50:41 old-k8s-version-855707 kubelet[1388]: I1227 20:50:41.975805    1388 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 27 20:50:42 old-k8s-version-855707 kubelet[1388]: I1227 20:50:42.025498    1388 topology_manager.go:215] "Topology Admit Handler" podUID="c3f69f3c-dfab-44a6-a2b9-a993044ed4ec" podNamespace="kube-system" podName="storage-provisioner"
	Dec 27 20:50:42 old-k8s-version-855707 kubelet[1388]: I1227 20:50:42.029368    1388 topology_manager.go:215] "Topology Admit Handler" podUID="a817d3e5-41a0-4029-8f3a-e902cf24169c" podNamespace="kube-system" podName="coredns-5dd5756b68-gpcrh"
	Dec 27 20:50:42 old-k8s-version-855707 kubelet[1388]: I1227 20:50:42.087815    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljt4c\" (UniqueName: \"kubernetes.io/projected/a817d3e5-41a0-4029-8f3a-e902cf24169c-kube-api-access-ljt4c\") pod \"coredns-5dd5756b68-gpcrh\" (UID: \"a817d3e5-41a0-4029-8f3a-e902cf24169c\") " pod="kube-system/coredns-5dd5756b68-gpcrh"
	Dec 27 20:50:42 old-k8s-version-855707 kubelet[1388]: I1227 20:50:42.087892    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c3f69f3c-dfab-44a6-a2b9-a993044ed4ec-tmp\") pod \"storage-provisioner\" (UID: \"c3f69f3c-dfab-44a6-a2b9-a993044ed4ec\") " pod="kube-system/storage-provisioner"
	Dec 27 20:50:42 old-k8s-version-855707 kubelet[1388]: I1227 20:50:42.087932    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a817d3e5-41a0-4029-8f3a-e902cf24169c-config-volume\") pod \"coredns-5dd5756b68-gpcrh\" (UID: \"a817d3e5-41a0-4029-8f3a-e902cf24169c\") " pod="kube-system/coredns-5dd5756b68-gpcrh"
	Dec 27 20:50:42 old-k8s-version-855707 kubelet[1388]: I1227 20:50:42.087960    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp4kn\" (UniqueName: \"kubernetes.io/projected/c3f69f3c-dfab-44a6-a2b9-a993044ed4ec-kube-api-access-pp4kn\") pod \"storage-provisioner\" (UID: \"c3f69f3c-dfab-44a6-a2b9-a993044ed4ec\") " pod="kube-system/storage-provisioner"
	Dec 27 20:50:42 old-k8s-version-855707 kubelet[1388]: W1227 20:50:42.376294    1388 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2/crio-437aa9e66ec0ecc27720557a07ab2ece7bef9f3f5f841f263de73efff76cc0db WatchSource:0}: Error finding container 437aa9e66ec0ecc27720557a07ab2ece7bef9f3f5f841f263de73efff76cc0db: Status 404 returned error can't find the container with id 437aa9e66ec0ecc27720557a07ab2ece7bef9f3f5f841f263de73efff76cc0db
	Dec 27 20:50:42 old-k8s-version-855707 kubelet[1388]: I1227 20:50:42.759564    1388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.759519555 podCreationTimestamp="2025-12-27 20:50:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:50:42.745627509 +0000 UTC m=+27.393364080" watchObservedRunningTime="2025-12-27 20:50:42.759519555 +0000 UTC m=+27.407256118"
	Dec 27 20:50:42 old-k8s-version-855707 kubelet[1388]: I1227 20:50:42.759649    1388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-gpcrh" podStartSLOduration=14.759630821 podCreationTimestamp="2025-12-27 20:50:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:50:42.759334855 +0000 UTC m=+27.407071426" watchObservedRunningTime="2025-12-27 20:50:42.759630821 +0000 UTC m=+27.407367392"
	Dec 27 20:50:44 old-k8s-version-855707 kubelet[1388]: I1227 20:50:44.879002    1388 topology_manager.go:215] "Topology Admit Handler" podUID="9c2c639a-7368-4a9e-ad13-67a2e87b202b" podNamespace="default" podName="busybox"
	Dec 27 20:50:44 old-k8s-version-855707 kubelet[1388]: I1227 20:50:44.909604    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsp8h\" (UniqueName: \"kubernetes.io/projected/9c2c639a-7368-4a9e-ad13-67a2e87b202b-kube-api-access-wsp8h\") pod \"busybox\" (UID: \"9c2c639a-7368-4a9e-ad13-67a2e87b202b\") " pod="default/busybox"
	Dec 27 20:50:45 old-k8s-version-855707 kubelet[1388]: W1227 20:50:45.267446    1388 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2/crio-a4c71c3933a33293d3cc908186cd23cc5287ec063983cd73abfab4ce96bf7c05 WatchSource:0}: Error finding container a4c71c3933a33293d3cc908186cd23cc5287ec063983cd73abfab4ce96bf7c05: Status 404 returned error can't find the container with id a4c71c3933a33293d3cc908186cd23cc5287ec063983cd73abfab4ce96bf7c05
	
	
	==> storage-provisioner [d5c4a6dbd994afaeca6cdd5cbf96e3cb65728796c440b09d62059053df67646d] <==
	I1227 20:50:42.438651       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 20:50:42.453613       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 20:50:42.453803       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1227 20:50:42.465791       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 20:50:42.466048       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-855707_a885d874-5217-4caa-bb4e-e51e3e48cdd2!
	I1227 20:50:42.471583       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c3590326-14f9-4148-8efd-a85b09a3c11f", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-855707_a885d874-5217-4caa-bb4e-e51e3e48cdd2 became leader
	I1227 20:50:42.567853       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-855707_a885d874-5217-4caa-bb4e-e51e3e48cdd2!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-855707 -n old-k8s-version-855707
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-855707 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.42s)
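The kube-scheduler "forbidden" warnings in the log above are the usual transient bring-up pattern: the scheduler's informers race RBAC/cache readiness on the freshly (re)started control plane and the errors stop once "Caches are synced" is logged. Whether they persist can be checked with a standard kubectl query; the context name is the one this test uses, and the command itself is not part of the harness:

	# hedged one-liner: verify the scheduler's RBAC once the cluster is up
	kubectl --context old-k8s-version-855707 auth can-i list pods --as=system:kube-scheduler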

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-855707 --alsologtostderr -v=1
E1227 20:52:13.967176  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-855707 --alsologtostderr -v=1: exit status 80 (1.819312782s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-855707 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:52:12.491433  487154 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:52:12.491578  487154 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:52:12.491584  487154 out.go:374] Setting ErrFile to fd 2...
	I1227 20:52:12.491600  487154 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:52:12.491913  487154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:52:12.492196  487154 out.go:368] Setting JSON to false
	I1227 20:52:12.492225  487154 mustload.go:66] Loading cluster: old-k8s-version-855707
	I1227 20:52:12.492677  487154 config.go:182] Loaded profile config "old-k8s-version-855707": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 20:52:12.493203  487154 cli_runner.go:164] Run: docker container inspect old-k8s-version-855707 --format={{.State.Status}}
	I1227 20:52:12.513184  487154 host.go:66] Checking if "old-k8s-version-855707" exists ...
	I1227 20:52:12.513542  487154 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:52:12.582884  487154 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-27 20:52:12.572071312 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:52:12.583530  487154 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22332/minikube-v1.37.0-1766811082-22332-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766811082-22332/minikube-v1.37.0-1766811082-22332-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766811082-22332-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:old-k8s-version-855707 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s
(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 20:52:12.590292  487154 out.go:179] * Pausing node old-k8s-version-855707 ... 
	I1227 20:52:12.594471  487154 host.go:66] Checking if "old-k8s-version-855707" exists ...
	I1227 20:52:12.594910  487154 ssh_runner.go:195] Run: systemctl --version
	I1227 20:52:12.594955  487154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:52:12.613132  487154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa Username:docker}
	I1227 20:52:12.712500  487154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:52:12.738300  487154 pause.go:52] kubelet running: true
	I1227 20:52:12.738371  487154 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:52:12.976971  487154 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:52:12.977073  487154 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:52:13.044674  487154 cri.go:96] found id: "c896e123ec6228b1d4e8e675ab96d63527b576dc221b62eaba55cf1b039bbaab"
	I1227 20:52:13.044697  487154 cri.go:96] found id: "472e9ef0638e8f491ece4877ddb32d0f7b578b7729e6009843e0f87891b75f03"
	I1227 20:52:13.044702  487154 cri.go:96] found id: "990f851e3eb3233bbb21418beecefa82d2748a136f67502e18f0e49d805ab852"
	I1227 20:52:13.044706  487154 cri.go:96] found id: "b609c90a033605921a0c78fea186518f1183108bfde56e07abdd5ad07826877b"
	I1227 20:52:13.044709  487154 cri.go:96] found id: "045ae702e454a1e50b74425e5caee0ffdfc04f73a5b99e25e3bcab98cb86fbb5"
	I1227 20:52:13.044713  487154 cri.go:96] found id: "7d4c3f3f4c978744c9c5787250f663c59c07f863a1314cc8b1c62aeb93bd69f7"
	I1227 20:52:13.044716  487154 cri.go:96] found id: "dbb47e0c12746be14a55cae562bfb8e8d54317017f8f82d613e967bb89746d7e"
	I1227 20:52:13.044720  487154 cri.go:96] found id: "1aee215b0f3720a59c0901866e7a32993b55fc4f9c1a946cd923bcfd33eef1dd"
	I1227 20:52:13.044723  487154 cri.go:96] found id: "f81dcf2a161529d4fcaa1a2ecd3b730d50a5f75c24fdaf286d568185a6ad7aad"
	I1227 20:52:13.044740  487154 cri.go:96] found id: "9f13af8e3cf48801f0478b82b99ee66bcfc73622a69c8aac7a5a9bc87d6b8dad"
	I1227 20:52:13.044744  487154 cri.go:96] found id: "e9fac01bcc780f0a4cbad6be58cc6198eff0b31d06a677ed0563d738670e0d3f"
	I1227 20:52:13.044747  487154 cri.go:96] found id: ""
	I1227 20:52:13.044797  487154 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:52:13.055421  487154 retry.go:84] will retry after 400ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:52:13Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:52:13.430049  487154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:52:13.442510  487154 pause.go:52] kubelet running: false
	I1227 20:52:13.442576  487154 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:52:13.615295  487154 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:52:13.615385  487154 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:52:13.680171  487154 cri.go:96] found id: "c896e123ec6228b1d4e8e675ab96d63527b576dc221b62eaba55cf1b039bbaab"
	I1227 20:52:13.680249  487154 cri.go:96] found id: "472e9ef0638e8f491ece4877ddb32d0f7b578b7729e6009843e0f87891b75f03"
	I1227 20:52:13.680268  487154 cri.go:96] found id: "990f851e3eb3233bbb21418beecefa82d2748a136f67502e18f0e49d805ab852"
	I1227 20:52:13.680285  487154 cri.go:96] found id: "b609c90a033605921a0c78fea186518f1183108bfde56e07abdd5ad07826877b"
	I1227 20:52:13.680323  487154 cri.go:96] found id: "045ae702e454a1e50b74425e5caee0ffdfc04f73a5b99e25e3bcab98cb86fbb5"
	I1227 20:52:13.680348  487154 cri.go:96] found id: "7d4c3f3f4c978744c9c5787250f663c59c07f863a1314cc8b1c62aeb93bd69f7"
	I1227 20:52:13.680370  487154 cri.go:96] found id: "dbb47e0c12746be14a55cae562bfb8e8d54317017f8f82d613e967bb89746d7e"
	I1227 20:52:13.680403  487154 cri.go:96] found id: "1aee215b0f3720a59c0901866e7a32993b55fc4f9c1a946cd923bcfd33eef1dd"
	I1227 20:52:13.680430  487154 cri.go:96] found id: "f81dcf2a161529d4fcaa1a2ecd3b730d50a5f75c24fdaf286d568185a6ad7aad"
	I1227 20:52:13.680453  487154 cri.go:96] found id: "9f13af8e3cf48801f0478b82b99ee66bcfc73622a69c8aac7a5a9bc87d6b8dad"
	I1227 20:52:13.680470  487154 cri.go:96] found id: "e9fac01bcc780f0a4cbad6be58cc6198eff0b31d06a677ed0563d738670e0d3f"
	I1227 20:52:13.680490  487154 cri.go:96] found id: ""
	I1227 20:52:13.680571  487154 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:52:13.942643  487154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:52:13.955760  487154 pause.go:52] kubelet running: false
	I1227 20:52:13.955841  487154 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:52:14.150120  487154 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:52:14.150235  487154 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:52:14.213794  487154 cri.go:96] found id: "c896e123ec6228b1d4e8e675ab96d63527b576dc221b62eaba55cf1b039bbaab"
	I1227 20:52:14.213865  487154 cri.go:96] found id: "472e9ef0638e8f491ece4877ddb32d0f7b578b7729e6009843e0f87891b75f03"
	I1227 20:52:14.213876  487154 cri.go:96] found id: "990f851e3eb3233bbb21418beecefa82d2748a136f67502e18f0e49d805ab852"
	I1227 20:52:14.213881  487154 cri.go:96] found id: "b609c90a033605921a0c78fea186518f1183108bfde56e07abdd5ad07826877b"
	I1227 20:52:14.213884  487154 cri.go:96] found id: "045ae702e454a1e50b74425e5caee0ffdfc04f73a5b99e25e3bcab98cb86fbb5"
	I1227 20:52:14.213888  487154 cri.go:96] found id: "7d4c3f3f4c978744c9c5787250f663c59c07f863a1314cc8b1c62aeb93bd69f7"
	I1227 20:52:14.213891  487154 cri.go:96] found id: "dbb47e0c12746be14a55cae562bfb8e8d54317017f8f82d613e967bb89746d7e"
	I1227 20:52:14.213894  487154 cri.go:96] found id: "1aee215b0f3720a59c0901866e7a32993b55fc4f9c1a946cd923bcfd33eef1dd"
	I1227 20:52:14.213897  487154 cri.go:96] found id: "f81dcf2a161529d4fcaa1a2ecd3b730d50a5f75c24fdaf286d568185a6ad7aad"
	I1227 20:52:14.213903  487154 cri.go:96] found id: "9f13af8e3cf48801f0478b82b99ee66bcfc73622a69c8aac7a5a9bc87d6b8dad"
	I1227 20:52:14.213911  487154 cri.go:96] found id: "e9fac01bcc780f0a4cbad6be58cc6198eff0b31d06a677ed0563d738670e0d3f"
	I1227 20:52:14.213914  487154 cri.go:96] found id: ""
	I1227 20:52:14.213964  487154 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:52:14.228744  487154 out.go:203] 
	W1227 20:52:14.232415  487154 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:52:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:52:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 20:52:14.232446  487154 out.go:285] * 
	* 
	W1227 20:52:14.235964  487154 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 20:52:14.241414  487154 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-855707 --alsologtostderr -v=1 failed: exit status 80
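The stderr above shows where the pause path gives up: the kubelet is disabled and the CRI container IDs are enumerated via crictl, but every attempt to list running containers with `sudo runc list -f json` inside the node fails because /run/runc does not exist, and that is what surfaces as GUEST_PAUSE. The failing step can be reproduced by hand; the commands below are a sketch (not harness output), assuming the docker driver and the node container name from this run:

	# run the same listing minikube attempts, inside the node container
	docker exec old-k8s-version-855707 sudo runc list -f json
	# in this environment it should exit 1 with:
	#   open /run/runc: no such file or directory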
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-855707
helpers_test.go:244: (dbg) docker inspect old-k8s-version-855707:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2",
	        "Created": "2025-12-27T20:49:49.083982112Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 484582,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:51:08.05684373Z",
	            "FinishedAt": "2025-12-27T20:51:07.268796901Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2/hostname",
	        "HostsPath": "/var/lib/docker/containers/ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2/hosts",
	        "LogPath": "/var/lib/docker/containers/ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2/ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2-json.log",
	        "Name": "/old-k8s-version-855707",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-855707:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-855707",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2",
	                "LowerDir": "/var/lib/docker/overlay2/4cfbcea77308f009a9856e9df5c3a29b9bfd669c592158a020d3e7751dc1e39a-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4cfbcea77308f009a9856e9df5c3a29b9bfd669c592158a020d3e7751dc1e39a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4cfbcea77308f009a9856e9df5c3a29b9bfd669c592158a020d3e7751dc1e39a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4cfbcea77308f009a9856e9df5c3a29b9bfd669c592158a020d3e7751dc1e39a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-855707",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-855707/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-855707",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-855707",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-855707",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ddd82653966951316be61d0975c9ce25540a7124177cce3580b840e086388eac",
	            "SandboxKey": "/var/run/docker/netns/ddd826539669",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33413"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33414"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33417"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33415"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33416"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-855707": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:ce:6e:6e:d5:43",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "30f18a4a5fe47b52fd514e9e7c68df45288c84a3f84ad77d2d2746ff085abb75",
	                    "EndpointID": "d851d0916c80ba1480c2eb386e6fa4bb51e709fa7ce25c24cc5c624b2af0cb11",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-855707",
	                        "ffdc66f60c1f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
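The inspect output confirms the node container is still running and was never paused at the Docker level ("Paused": false), so the failure is confined to the in-guest runc step. Those two fields can also be read directly with a Go-template query (a convenience command, not something the harness runs):

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' old-k8s-version-855707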
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-855707 -n old-k8s-version-855707
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-855707 -n old-k8s-version-855707: exit status 2 (338.384643ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-855707 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-855707 logs -n 25: (1.30089429s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-037975 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo containerd config dump                                                                                                                                                                                                  │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo crio config                                                                                                                                                                                                             │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ delete  │ -p cilium-037975                                                                                                                                                                                                                              │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │ 27 Dec 25 20:45 UTC │
	│ start   │ -p cert-expiration-629954 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-629954    │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │ 27 Dec 25 20:45 UTC │
	│ start   │ -p cert-expiration-629954 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-629954    │ jenkins │ v1.37.0 │ 27 Dec 25 20:48 UTC │ 27 Dec 25 20:48 UTC │
	│ delete  │ -p cert-expiration-629954                                                                                                                                                                                                                     │ cert-expiration-629954    │ jenkins │ v1.37.0 │ 27 Dec 25 20:48 UTC │ 27 Dec 25 20:48 UTC │
	│ start   │ -p force-systemd-flag-604544 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-604544 │ jenkins │ v1.37.0 │ 27 Dec 25 20:48 UTC │                     │
	│ delete  │ -p force-systemd-env-859716                                                                                                                                                                                                                   │ force-systemd-env-859716  │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ start   │ -p cert-options-765175 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-765175       │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ ssh     │ cert-options-765175 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-765175       │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ ssh     │ -p cert-options-765175 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-765175       │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ delete  │ -p cert-options-765175                                                                                                                                                                                                                        │ cert-options-765175       │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ start   │ -p old-k8s-version-855707 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-855707    │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-855707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-855707    │ jenkins │ v1.37.0 │ 27 Dec 25 20:50 UTC │                     │
	│ stop    │ -p old-k8s-version-855707 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-855707    │ jenkins │ v1.37.0 │ 27 Dec 25 20:50 UTC │ 27 Dec 25 20:51 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-855707 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-855707    │ jenkins │ v1.37.0 │ 27 Dec 25 20:51 UTC │ 27 Dec 25 20:51 UTC │
	│ start   │ -p old-k8s-version-855707 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-855707    │ jenkins │ v1.37.0 │ 27 Dec 25 20:51 UTC │ 27 Dec 25 20:52 UTC │
	│ image   │ old-k8s-version-855707 image list --format=json                                                                                                                                                                                               │ old-k8s-version-855707    │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ pause   │ -p old-k8s-version-855707 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-855707    │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:51:07
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:51:07.787567  484456 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:51:07.787711  484456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:51:07.787724  484456 out.go:374] Setting ErrFile to fd 2...
	I1227 20:51:07.787743  484456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:51:07.788028  484456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:51:07.788408  484456 out.go:368] Setting JSON to false
	I1227 20:51:07.789247  484456 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9220,"bootTime":1766859448,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:51:07.789317  484456 start.go:143] virtualization:  
	I1227 20:51:07.792366  484456 out.go:179] * [old-k8s-version-855707] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:51:07.796252  484456 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:51:07.796340  484456 notify.go:221] Checking for updates...
	I1227 20:51:07.802229  484456 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:51:07.805168  484456 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:51:07.808082  484456 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:51:07.810884  484456 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:51:07.813655  484456 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:51:07.817052  484456 config.go:182] Loaded profile config "old-k8s-version-855707": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 20:51:07.820521  484456 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I1227 20:51:07.823378  484456 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:51:07.851963  484456 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:51:07.852076  484456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:51:07.908499  484456 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:51:07.899491428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:51:07.908598  484456 docker.go:319] overlay module found
	I1227 20:51:07.911734  484456 out.go:179] * Using the docker driver based on existing profile
	I1227 20:51:07.914620  484456 start.go:309] selected driver: docker
	I1227 20:51:07.914639  484456 start.go:928] validating driver "docker" against &{Name:old-k8s-version-855707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-855707 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:51:07.914742  484456 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:51:07.915463  484456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:51:07.970538  484456 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:51:07.961739252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:51:07.970917  484456 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:51:07.970947  484456 cni.go:84] Creating CNI manager for ""
	I1227 20:51:07.970997  484456 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:51:07.971039  484456 start.go:353] cluster config:
	{Name:old-k8s-version-855707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-855707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
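	The cluster config dumped above is also persisted as the profile's config.json under the MINIKUBE_HOME profiles directory (the save is logged a few lines below), so the same values can be read back outside a start run. A minimal check, assuming jq is available on the host (any JSON pretty-printer works):
	    jq '.KubernetesConfig.KubernetesVersion, .Nodes' /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/config.json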
	I1227 20:51:07.974353  484456 out.go:179] * Starting "old-k8s-version-855707" primary control-plane node in "old-k8s-version-855707" cluster
	I1227 20:51:07.977239  484456 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:51:07.980169  484456 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:51:07.983008  484456 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 20:51:07.983052  484456 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:51:07.983061  484456 cache.go:65] Caching tarball of preloaded images
	I1227 20:51:07.983095  484456 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:51:07.983140  484456 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:51:07.983150  484456 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1227 20:51:07.983263  484456 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/config.json ...
	I1227 20:51:08.002169  484456 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:51:08.002193  484456 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:51:08.002210  484456 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:51:08.002242  484456 start.go:360] acquireMachinesLock for old-k8s-version-855707: {Name:mk772100ba05b793472926b85f6f775654e62c2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:51:08.002299  484456 start.go:364] duration metric: took 35.61µs to acquireMachinesLock for "old-k8s-version-855707"
	I1227 20:51:08.002324  484456 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:51:08.002335  484456 fix.go:54] fixHost starting: 
	I1227 20:51:08.002603  484456 cli_runner.go:164] Run: docker container inspect old-k8s-version-855707 --format={{.State.Status}}
	I1227 20:51:08.022395  484456 fix.go:112] recreateIfNeeded on old-k8s-version-855707: state=Stopped err=<nil>
	W1227 20:51:08.022430  484456 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:51:08.025691  484456 out.go:252] * Restarting existing docker container for "old-k8s-version-855707" ...
	I1227 20:51:08.025783  484456 cli_runner.go:164] Run: docker start old-k8s-version-855707
	I1227 20:51:08.291580  484456 cli_runner.go:164] Run: docker container inspect old-k8s-version-855707 --format={{.State.Status}}
	I1227 20:51:08.319843  484456 kic.go:430] container "old-k8s-version-855707" state is running.
	I1227 20:51:08.320218  484456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-855707
	I1227 20:51:08.338512  484456 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/config.json ...
	I1227 20:51:08.338765  484456 machine.go:94] provisionDockerMachine start ...
	I1227 20:51:08.338845  484456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:51:08.362110  484456 main.go:144] libmachine: Using SSH client type: native
	I1227 20:51:08.362485  484456 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1227 20:51:08.362504  484456 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:51:08.363077  484456 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39570->127.0.0.1:33413: read: connection reset by peer
	I1227 20:51:11.505496  484456 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-855707
	
	I1227 20:51:11.505520  484456 ubuntu.go:182] provisioning hostname "old-k8s-version-855707"
	I1227 20:51:11.505595  484456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:51:11.528876  484456 main.go:144] libmachine: Using SSH client type: native
	I1227 20:51:11.529190  484456 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1227 20:51:11.529208  484456 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-855707 && echo "old-k8s-version-855707" | sudo tee /etc/hostname
	I1227 20:51:11.678067  484456 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-855707
	
	I1227 20:51:11.678143  484456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:51:11.694480  484456 main.go:144] libmachine: Using SSH client type: native
	I1227 20:51:11.694817  484456 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1227 20:51:11.694840  484456 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-855707' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-855707/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-855707' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:51:11.833693  484456 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:51:11.833722  484456 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:51:11.833742  484456 ubuntu.go:190] setting up certificates
	I1227 20:51:11.833750  484456 provision.go:84] configureAuth start
	I1227 20:51:11.833811  484456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-855707
	I1227 20:51:11.850391  484456 provision.go:143] copyHostCerts
	I1227 20:51:11.850458  484456 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:51:11.850478  484456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:51:11.850564  484456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:51:11.850665  484456 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:51:11.850676  484456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:51:11.850704  484456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:51:11.850768  484456 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:51:11.850776  484456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:51:11.850801  484456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:51:11.850860  484456 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-855707 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-855707]
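	The generated server cert carries the SANs listed above (loopback, the node IP 192.168.76.2, and the host names); they can be confirmed from the server.pem that is copied onto the node a few lines below, e.g.:
	    openssl x509 -noout -text -in /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'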
	I1227 20:51:12.347440  484456 provision.go:177] copyRemoteCerts
	I1227 20:51:12.347536  484456 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:51:12.347608  484456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:51:12.364628  484456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa Username:docker}
	I1227 20:51:12.465143  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:51:12.482392  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1227 20:51:12.500330  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:51:12.517730  484456 provision.go:87] duration metric: took 683.955142ms to configureAuth
	I1227 20:51:12.517760  484456 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:51:12.517956  484456 config.go:182] Loaded profile config "old-k8s-version-855707": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 20:51:12.518064  484456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:51:12.536262  484456 main.go:144] libmachine: Using SSH client type: native
	I1227 20:51:12.536578  484456 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1227 20:51:12.536593  484456 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:51:12.869074  484456 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:51:12.869098  484456 machine.go:97] duration metric: took 4.530317273s to provisionDockerMachine
	I1227 20:51:12.869111  484456 start.go:293] postStartSetup for "old-k8s-version-855707" (driver="docker")
	I1227 20:51:12.869122  484456 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:51:12.869188  484456 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:51:12.869235  484456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:51:12.888239  484456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa Username:docker}
	I1227 20:51:12.990134  484456 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:51:12.993924  484456 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:51:12.993955  484456 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:51:12.993974  484456 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:51:12.994024  484456 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:51:12.994107  484456 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:51:12.994222  484456 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:51:13.002108  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:51:13.023849  484456 start.go:296] duration metric: took 154.723437ms for postStartSetup
	I1227 20:51:13.023927  484456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:51:13.023986  484456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:51:13.046912  484456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa Username:docker}
	I1227 20:51:13.142554  484456 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:51:13.146895  484456 fix.go:56] duration metric: took 5.144551799s for fixHost
	I1227 20:51:13.146923  484456 start.go:83] releasing machines lock for "old-k8s-version-855707", held for 5.144610227s
	I1227 20:51:13.147006  484456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-855707
	I1227 20:51:13.163560  484456 ssh_runner.go:195] Run: cat /version.json
	I1227 20:51:13.163586  484456 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:51:13.163611  484456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:51:13.163639  484456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:51:13.185855  484456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa Username:docker}
	I1227 20:51:13.195254  484456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa Username:docker}
	I1227 20:51:13.289089  484456 ssh_runner.go:195] Run: systemctl --version
	I1227 20:51:13.384704  484456 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:51:13.418933  484456 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:51:13.423054  484456 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:51:13.423130  484456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:51:13.430400  484456 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
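	The find/mv step above disables any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI minikube installs (kindnet here) gets loaded; in this run nothing was found to disable. To list what is active on the node, or to restore a disabled file (the filename below is a hypothetical example), something like:
	    minikube -p old-k8s-version-855707 ssh -- sudo ls -la /etc/cni/net.d
	    minikube -p old-k8s-version-855707 ssh -- sudo mv /etc/cni/net.d/100-crio-bridge.conf.mk_disabled /etc/cni/net.d/100-crio-bridge.conf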
	I1227 20:51:13.430462  484456 start.go:496] detecting cgroup driver to use...
	I1227 20:51:13.430507  484456 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:51:13.430564  484456 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:51:13.444722  484456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:51:13.457223  484456 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:51:13.457314  484456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:51:13.472381  484456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:51:13.484977  484456 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:51:13.594462  484456 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:51:13.720284  484456 docker.go:234] disabling docker service ...
	I1227 20:51:13.720375  484456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:51:13.738884  484456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:51:13.752235  484456 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:51:13.875247  484456 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:51:13.995477  484456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
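	Note that cri-docker.service and docker.service are masked rather than merely disabled: systemctl mask links the unit to /dev/null, so it cannot be started even as a dependency of another unit. The resulting state is visible with:
	    minikube -p old-k8s-version-855707 ssh -- systemctl is-enabled docker.service cri-docker.service
	    # each masked unit prints "masked"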
	I1227 20:51:14.010932  484456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:51:14.026627  484456 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1227 20:51:14.026798  484456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:51:14.035944  484456 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:51:14.036075  484456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:51:14.045701  484456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:51:14.054692  484456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:51:14.063605  484456 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:51:14.071933  484456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:51:14.080835  484456 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:51:14.089342  484456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:51:14.098557  484456 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:51:14.106216  484456 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:51:14.113531  484456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:51:14.220312  484456 ssh_runner.go:195] Run: sudo systemctl restart crio
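	All of the sed edits above target /etc/crio/crio.conf.d/02-crio.conf. After the restart that drop-in should carry roughly the following values, a sketch reconstructed from the edits; the surrounding keys and section headers of the stock file are assumed:
	    minikube -p old-k8s-version-855707 ssh -- sudo cat /etc/crio/crio.conf.d/02-crio.conf
	    # expected, roughly:
	    #   pause_image = "registry.k8s.io/pause:3.9"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   default_sysctls = [
	    #     "net.ipv4.ip_unprivileged_port_start=0",
	    #   ]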
	I1227 20:51:14.420615  484456 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:51:14.420716  484456 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:51:14.424380  484456 start.go:574] Will wait 60s for crictl version
	I1227 20:51:14.424441  484456 ssh_runner.go:195] Run: which crictl
	I1227 20:51:14.427769  484456 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:51:14.457343  484456 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:51:14.457520  484456 ssh_runner.go:195] Run: crio --version
	I1227 20:51:14.494169  484456 ssh_runner.go:195] Run: crio --version
	I1227 20:51:14.538429  484456 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1227 20:51:14.541393  484456 cli_runner.go:164] Run: docker network inspect old-k8s-version-855707 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:51:14.557661  484456 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 20:51:14.561585  484456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
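	The /etc/hosts update here (and again further down for control-plane.minikube.internal) uses an idempotent replace pattern: strip any existing line for the name, append a fresh entry, write to a temp file, then sudo cp it back, since a plain redirect would run with the unprivileged shell's permissions. Generalized, with a placeholder host name and IP:
	    { grep -v $'\texample.internal$' /etc/hosts; echo "192.168.76.1	example.internal"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts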
	I1227 20:51:14.572329  484456 kubeadm.go:884] updating cluster {Name:old-k8s-version-855707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-855707 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:51:14.572446  484456 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 20:51:14.572505  484456 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:51:14.609876  484456 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:51:14.609901  484456 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:51:14.609962  484456 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:51:14.636645  484456 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:51:14.636679  484456 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:51:14.636687  484456 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1227 20:51:14.636787  484456 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-855707 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-855707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
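	The kubelet unit drop-in rendered above is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; the effective unit (base file plus drop-ins) can be reviewed on the node with:
	    minikube -p old-k8s-version-855707 ssh -- systemctl cat kubelet
	    minikube -p old-k8s-version-855707 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf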
	I1227 20:51:14.636871  484456 ssh_runner.go:195] Run: crio config
	I1227 20:51:14.689645  484456 cni.go:84] Creating CNI manager for ""
	I1227 20:51:14.689712  484456 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:51:14.689742  484456 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:51:14.689766  484456 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-855707 NodeName:old-k8s-version-855707 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:51:14.689958  484456 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-855707"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:51:14.690035  484456 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1227 20:51:14.697839  484456 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:51:14.697918  484456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:51:14.705328  484456 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1227 20:51:14.717985  484456 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:51:14.730780  484456 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
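	The kubeadm config rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new; whether the control plane needs reconfiguration is decided later by diffing it against the copy already on the node, which can be reproduced by hand:
	    minikube -p old-k8s-version-855707 ssh -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new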
	I1227 20:51:14.744273  484456 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:51:14.748309  484456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:51:14.758766  484456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:51:14.869968  484456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:51:14.885558  484456 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707 for IP: 192.168.76.2
	I1227 20:51:14.885577  484456 certs.go:195] generating shared ca certs ...
	I1227 20:51:14.885592  484456 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:51:14.885761  484456 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:51:14.885827  484456 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:51:14.885843  484456 certs.go:257] generating profile certs ...
	I1227 20:51:14.885947  484456 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/client.key
	I1227 20:51:14.886022  484456 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/apiserver.key.cdba09ac
	I1227 20:51:14.886077  484456 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/proxy-client.key
	I1227 20:51:14.886201  484456 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:51:14.886246  484456 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:51:14.886260  484456 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:51:14.886298  484456 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:51:14.886337  484456 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:51:14.886366  484456 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:51:14.886424  484456 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:51:14.892846  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:51:14.915757  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:51:14.935221  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:51:14.956646  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:51:14.976089  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1227 20:51:14.999382  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:51:15.034536  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:51:15.057796  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:51:15.085955  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:51:15.107663  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:51:15.128688  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:51:15.147814  484456 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:51:15.162484  484456 ssh_runner.go:195] Run: openssl version
	I1227 20:51:15.168801  484456 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:51:15.176562  484456 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:51:15.184357  484456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:51:15.188362  484456 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:51:15.188441  484456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:51:15.232628  484456 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:51:15.240265  484456 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:51:15.247575  484456 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:51:15.255107  484456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:51:15.259595  484456 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:51:15.259711  484456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:51:15.301135  484456 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:51:15.308675  484456 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:51:15.315798  484456 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:51:15.323345  484456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:51:15.327101  484456 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:51:15.327196  484456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:51:15.367968  484456 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
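	The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed CA directory lookup: a CA under /etc/ssl/certs is found via a symlink named <subject-hash>.0 that points at the PEM file. The same thing by hand for the minikube CA:
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	    # b5213941.0 in this run, per the test -L check above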
	I1227 20:51:15.375350  484456 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:51:15.379004  484456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:51:15.420213  484456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:51:15.493997  484456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:51:15.564213  484456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:51:15.646564  484456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:51:15.709902  484456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
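	Each of these openssl x509 -checkend 86400 runs exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, so the block is a cheap probe for control-plane certs that are about to expire before the existing ones are reused. The same check in isolation, run inside minikube -p old-k8s-version-855707 ssh:
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "valid for at least another 24h" \
	      || echo "expires within 24h (or unreadable); would need regeneration"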
	I1227 20:51:15.781312  484456 kubeadm.go:401] StartCluster: {Name:old-k8s-version-855707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-855707 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:51:15.781477  484456 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:51:15.781594  484456 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:51:15.821824  484456 cri.go:96] found id: "7d4c3f3f4c978744c9c5787250f663c59c07f863a1314cc8b1c62aeb93bd69f7"
	I1227 20:51:15.821904  484456 cri.go:96] found id: "dbb47e0c12746be14a55cae562bfb8e8d54317017f8f82d613e967bb89746d7e"
	I1227 20:51:15.821932  484456 cri.go:96] found id: "1aee215b0f3720a59c0901866e7a32993b55fc4f9c1a946cd923bcfd33eef1dd"
	I1227 20:51:15.821950  484456 cri.go:96] found id: "f81dcf2a161529d4fcaa1a2ecd3b730d50a5f75c24fdaf286d568185a6ad7aad"
	I1227 20:51:15.821992  484456 cri.go:96] found id: ""
	I1227 20:51:15.822083  484456 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:51:15.833740  484456 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:51:15Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:51:15.833866  484456 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:51:15.851488  484456 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:51:15.851561  484456 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:51:15.851641  484456 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:51:15.861881  484456 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:51:15.862383  484456 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-855707" does not appear in /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:51:15.862544  484456 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-272475/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-855707" cluster setting kubeconfig missing "old-k8s-version-855707" context setting]
	I1227 20:51:15.862913  484456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:51:15.864550  484456 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:51:15.872046  484456 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1227 20:51:15.872125  484456 kubeadm.go:602] duration metric: took 20.531313ms to restartPrimaryControlPlane
	I1227 20:51:15.872157  484456 kubeadm.go:403] duration metric: took 90.848435ms to StartCluster
	I1227 20:51:15.872203  484456 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:51:15.872292  484456 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:51:15.873004  484456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:51:15.873272  484456 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:51:15.873726  484456 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:51:15.873801  484456 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-855707"
	I1227 20:51:15.873814  484456 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-855707"
	W1227 20:51:15.873820  484456 addons.go:248] addon storage-provisioner should already be in state true
	I1227 20:51:15.873842  484456 host.go:66] Checking if "old-k8s-version-855707" exists ...
	I1227 20:51:15.874558  484456 cli_runner.go:164] Run: docker container inspect old-k8s-version-855707 --format={{.State.Status}}
	I1227 20:51:15.874951  484456 config.go:182] Loaded profile config "old-k8s-version-855707": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 20:51:15.875084  484456 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-855707"
	I1227 20:51:15.875137  484456 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-855707"
	I1227 20:51:15.875448  484456 cli_runner.go:164] Run: docker container inspect old-k8s-version-855707 --format={{.State.Status}}
	I1227 20:51:15.875710  484456 addons.go:70] Setting dashboard=true in profile "old-k8s-version-855707"
	I1227 20:51:15.875748  484456 addons.go:239] Setting addon dashboard=true in "old-k8s-version-855707"
	W1227 20:51:15.875780  484456 addons.go:248] addon dashboard should already be in state true
	I1227 20:51:15.875836  484456 host.go:66] Checking if "old-k8s-version-855707" exists ...
	I1227 20:51:15.876320  484456 cli_runner.go:164] Run: docker container inspect old-k8s-version-855707 --format={{.State.Status}}
	I1227 20:51:15.885406  484456 out.go:179] * Verifying Kubernetes components...
	I1227 20:51:15.893575  484456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:51:15.923557  484456 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:51:15.927521  484456 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:51:15.927544  484456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:51:15.927613  484456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:51:15.930378  484456 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-855707"
	W1227 20:51:15.930401  484456 addons.go:248] addon default-storageclass should already be in state true
	I1227 20:51:15.930427  484456 host.go:66] Checking if "old-k8s-version-855707" exists ...
	I1227 20:51:15.930852  484456 cli_runner.go:164] Run: docker container inspect old-k8s-version-855707 --format={{.State.Status}}
	I1227 20:51:15.940721  484456 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 20:51:15.943655  484456 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 20:51:15.949504  484456 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 20:51:15.949535  484456 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 20:51:15.949610  484456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:51:15.998932  484456 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:51:15.998954  484456 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:51:15.999018  484456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:51:16.014162  484456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa Username:docker}
	I1227 20:51:16.029619  484456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa Username:docker}
	I1227 20:51:16.042688  484456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa Username:docker}
	I1227 20:51:16.264298  484456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:51:16.272736  484456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:51:16.299206  484456 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 20:51:16.299229  484456 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 20:51:16.366876  484456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:51:16.367651  484456 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 20:51:16.367703  484456 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 20:51:16.420114  484456 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 20:51:16.420183  484456 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 20:51:16.503172  484456 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 20:51:16.503255  484456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 20:51:16.587475  484456 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 20:51:16.587545  484456 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 20:51:16.642914  484456 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 20:51:16.642982  484456 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 20:51:16.665908  484456 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 20:51:16.665978  484456 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 20:51:16.684928  484456 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 20:51:16.685002  484456 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 20:51:16.709981  484456 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:51:16.710058  484456 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 20:51:16.736373  484456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:51:21.171667  484456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.907288517s)
	I1227 20:51:21.171771  484456 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.899012566s)
	I1227 20:51:21.171911  484456 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-855707" to be "Ready" ...
	I1227 20:51:21.171816  484456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.804855026s)
	I1227 20:51:21.202041  484456 node_ready.go:49] node "old-k8s-version-855707" is "Ready"
	I1227 20:51:21.202072  484456 node_ready.go:38] duration metric: took 30.044783ms for node "old-k8s-version-855707" to be "Ready" ...
	I1227 20:51:21.202087  484456 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:51:21.202160  484456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:51:21.690485  484456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.954013954s)
	I1227 20:51:21.690591  484456 api_server.go:72] duration metric: took 5.817262053s to wait for apiserver process to appear ...
	I1227 20:51:21.690800  484456 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:51:21.690818  484456 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:51:21.694869  484456 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-855707 addons enable metrics-server
	
	I1227 20:51:21.697851  484456 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1227 20:51:21.700867  484456 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 20:51:21.701360  484456 addons.go:530] duration metric: took 5.827636658s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1227 20:51:21.702343  484456 api_server.go:141] control plane version: v1.28.0
	I1227 20:51:21.702369  484456 api_server.go:131] duration metric: took 11.561467ms to wait for apiserver health ...
	I1227 20:51:21.702378  484456 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:51:21.706184  484456 system_pods.go:59] 8 kube-system pods found
	I1227 20:51:21.706221  484456 system_pods.go:61] "coredns-5dd5756b68-gpcrh" [a817d3e5-41a0-4029-8f3a-e902cf24169c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:51:21.706231  484456 system_pods.go:61] "etcd-old-k8s-version-855707" [347619f1-fd36-46b8-8280-533c8d8107e6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:51:21.706239  484456 system_pods.go:61] "kindnet-v9n7l" [20b6398f-382b-4304-beda-4f34e8e3a495] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:51:21.706247  484456 system_pods.go:61] "kube-apiserver-old-k8s-version-855707" [8bfd9b90-fbb5-4473-b9a4-572d9ccaa1c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:51:21.706269  484456 system_pods.go:61] "kube-controller-manager-old-k8s-version-855707" [2eff16b1-95f6-4af5-9aea-16016fbf3b59] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:51:21.706278  484456 system_pods.go:61] "kube-proxy-57s5h" [fac7868c-241d-4875-9eb7-976a578866b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 20:51:21.706289  484456 system_pods.go:61] "kube-scheduler-old-k8s-version-855707" [cd0966b8-145e-4e34-b566-9b91f269eaa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:51:21.706297  484456 system_pods.go:61] "storage-provisioner" [c3f69f3c-dfab-44a6-a2b9-a993044ed4ec] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:51:21.706307  484456 system_pods.go:74] duration metric: took 3.922938ms to wait for pod list to return data ...
	I1227 20:51:21.706315  484456 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:51:21.708844  484456 default_sa.go:45] found service account: "default"
	I1227 20:51:21.708866  484456 default_sa.go:55] duration metric: took 2.541948ms for default service account to be created ...
	I1227 20:51:21.708876  484456 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:51:21.712449  484456 system_pods.go:86] 8 kube-system pods found
	I1227 20:51:21.712521  484456 system_pods.go:89] "coredns-5dd5756b68-gpcrh" [a817d3e5-41a0-4029-8f3a-e902cf24169c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:51:21.712540  484456 system_pods.go:89] "etcd-old-k8s-version-855707" [347619f1-fd36-46b8-8280-533c8d8107e6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:51:21.712550  484456 system_pods.go:89] "kindnet-v9n7l" [20b6398f-382b-4304-beda-4f34e8e3a495] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:51:21.712557  484456 system_pods.go:89] "kube-apiserver-old-k8s-version-855707" [8bfd9b90-fbb5-4473-b9a4-572d9ccaa1c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:51:21.712565  484456 system_pods.go:89] "kube-controller-manager-old-k8s-version-855707" [2eff16b1-95f6-4af5-9aea-16016fbf3b59] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:51:21.712581  484456 system_pods.go:89] "kube-proxy-57s5h" [fac7868c-241d-4875-9eb7-976a578866b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 20:51:21.712594  484456 system_pods.go:89] "kube-scheduler-old-k8s-version-855707" [cd0966b8-145e-4e34-b566-9b91f269eaa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:51:21.712601  484456 system_pods.go:89] "storage-provisioner" [c3f69f3c-dfab-44a6-a2b9-a993044ed4ec] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:51:21.712608  484456 system_pods.go:126] duration metric: took 3.726743ms to wait for k8s-apps to be running ...
	I1227 20:51:21.712621  484456 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:51:21.712680  484456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:51:21.726818  484456 system_svc.go:56] duration metric: took 14.179409ms WaitForService to wait for kubelet
	I1227 20:51:21.726847  484456 kubeadm.go:587] duration metric: took 5.853517429s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:51:21.726868  484456 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:51:21.729738  484456 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:51:21.729770  484456 node_conditions.go:123] node cpu capacity is 2
	I1227 20:51:21.729791  484456 node_conditions.go:105] duration metric: took 2.917707ms to run NodePressure ...
	I1227 20:51:21.729804  484456 start.go:242] waiting for startup goroutines ...
	I1227 20:51:21.729820  484456 start.go:247] waiting for cluster config update ...
	I1227 20:51:21.729839  484456 start.go:256] writing updated cluster config ...
	I1227 20:51:21.730174  484456 ssh_runner.go:195] Run: rm -f paused
	I1227 20:51:21.733645  484456 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:51:21.738005  484456 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-gpcrh" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 20:51:23.748804  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:26.243673  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:28.743819  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:30.744141  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:32.753955  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:35.244452  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:37.245909  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:39.743692  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:41.746642  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:44.243450  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:46.244671  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:48.245308  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:50.755633  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:53.268770  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:55.754680  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:58.243430  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	I1227 20:51:59.244670  484456 pod_ready.go:94] pod "coredns-5dd5756b68-gpcrh" is "Ready"
	I1227 20:51:59.244701  484456 pod_ready.go:86] duration metric: took 37.506670083s for pod "coredns-5dd5756b68-gpcrh" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:51:59.248675  484456 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-855707" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:51:59.253982  484456 pod_ready.go:94] pod "etcd-old-k8s-version-855707" is "Ready"
	I1227 20:51:59.254008  484456 pod_ready.go:86] duration metric: took 5.308983ms for pod "etcd-old-k8s-version-855707" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:51:59.257023  484456 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-855707" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:51:59.261758  484456 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-855707" is "Ready"
	I1227 20:51:59.261783  484456 pod_ready.go:86] duration metric: took 4.735691ms for pod "kube-apiserver-old-k8s-version-855707" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:51:59.264672  484456 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-855707" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:51:59.447152  484456 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-855707" is "Ready"
	I1227 20:51:59.447182  484456 pod_ready.go:86] duration metric: took 182.483797ms for pod "kube-controller-manager-old-k8s-version-855707" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:51:59.643269  484456 pod_ready.go:83] waiting for pod "kube-proxy-57s5h" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:52:00.042958  484456 pod_ready.go:94] pod "kube-proxy-57s5h" is "Ready"
	I1227 20:52:00.042985  484456 pod_ready.go:86] duration metric: took 399.68686ms for pod "kube-proxy-57s5h" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:52:00.253818  484456 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-855707" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:52:00.642359  484456 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-855707" is "Ready"
	I1227 20:52:00.642390  484456 pod_ready.go:86] duration metric: took 388.542447ms for pod "kube-scheduler-old-k8s-version-855707" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:52:00.642404  484456 pod_ready.go:40] duration metric: took 38.90872564s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:52:00.693354  484456 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1227 20:52:00.696443  484456 out.go:203] 
	W1227 20:52:00.699282  484456 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1227 20:52:00.702179  484456 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1227 20:52:00.705033  484456 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-855707" cluster and "default" namespace by default
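
The restart log above ends by polling https://192.168.76.2:8443/healthz and then waiting for each kube-system pod to report Ready. Roughly the same checks can be reproduced by hand against the kubeconfig context the run just wrote (a sketch; the context name and pod label are taken from the log above, not from a documented interface):

	# apiserver health; the endpoint returns the plain string "ok" on success
	kubectl --context old-k8s-version-855707 get --raw /healthz

	# the readiness wait the harness performs, shown here for the CoreDNS pod
	kubectl --context old-k8s-version-855707 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=4m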
	
	
	==> CRI-O <==
	Dec 27 20:51:56 old-k8s-version-855707 crio[653]: time="2025-12-27T20:51:56.0274854Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:51:56 old-k8s-version-855707 crio[653]: time="2025-12-27T20:51:56.03488256Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:51:56 old-k8s-version-855707 crio[653]: time="2025-12-27T20:51:56.035492126Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:51:56 old-k8s-version-855707 crio[653]: time="2025-12-27T20:51:56.05096892Z" level=info msg="Created container 9f13af8e3cf48801f0478b82b99ee66bcfc73622a69c8aac7a5a9bc87d6b8dad: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fgk6l/dashboard-metrics-scraper" id=f37e8f77-57d7-4bc8-adfb-f0d4beb3a94a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:51:56 old-k8s-version-855707 crio[653]: time="2025-12-27T20:51:56.052093122Z" level=info msg="Starting container: 9f13af8e3cf48801f0478b82b99ee66bcfc73622a69c8aac7a5a9bc87d6b8dad" id=7ea75a9f-10ae-49aa-9c9f-c42fd38d9ebe name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:51:56 old-k8s-version-855707 crio[653]: time="2025-12-27T20:51:56.054729796Z" level=info msg="Started container" PID=1646 containerID=9f13af8e3cf48801f0478b82b99ee66bcfc73622a69c8aac7a5a9bc87d6b8dad description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fgk6l/dashboard-metrics-scraper id=7ea75a9f-10ae-49aa-9c9f-c42fd38d9ebe name=/runtime.v1.RuntimeService/StartContainer sandboxID=f3fab93b2872fb46b81689be2167c5e13b65bd16dafe4483e28d388d9382d93c
	Dec 27 20:51:56 old-k8s-version-855707 conmon[1644]: conmon 9f13af8e3cf48801f047 <ninfo>: container 1646 exited with status 1
	Dec 27 20:51:56 old-k8s-version-855707 crio[653]: time="2025-12-27T20:51:56.239685149Z" level=info msg="Removing container: f34d240c73df632f12ba0708f523cf3e099946def9fe310317b9fe10cf92238c" id=9d64f2fe-6b22-4920-82c9-c794da26f941 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:51:56 old-k8s-version-855707 crio[653]: time="2025-12-27T20:51:56.254893816Z" level=info msg="Error loading conmon cgroup of container f34d240c73df632f12ba0708f523cf3e099946def9fe310317b9fe10cf92238c: cgroup deleted" id=9d64f2fe-6b22-4920-82c9-c794da26f941 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:51:56 old-k8s-version-855707 crio[653]: time="2025-12-27T20:51:56.260840819Z" level=info msg="Removed container f34d240c73df632f12ba0708f523cf3e099946def9fe310317b9fe10cf92238c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fgk6l/dashboard-metrics-scraper" id=9d64f2fe-6b22-4920-82c9-c794da26f941 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.444184883Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.44817511Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.448208718Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.448235531Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.451525093Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.451566782Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.451586925Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.456163958Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.456345901Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.456444787Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.459686867Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.459845451Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.459918253Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.463188313Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.463335238Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	9f13af8e3cf48       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   f3fab93b2872f       dashboard-metrics-scraper-5f989dc9cf-fgk6l       kubernetes-dashboard
	c896e123ec622       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago      Running             storage-provisioner         2                   b6b61e424fca3       storage-provisioner                              kube-system
	e9fac01bcc780       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   33 seconds ago      Running             kubernetes-dashboard        0                   a9ccbd6764cd9       kubernetes-dashboard-8694d4445c-77hv8            kubernetes-dashboard
	472e9ef0638e8       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           53 seconds ago      Running             coredns                     1                   51e6ed3efc2b3       coredns-5dd5756b68-gpcrh                         kube-system
	990f851e3eb32       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago      Exited              storage-provisioner         1                   b6b61e424fca3       storage-provisioner                              kube-system
	72119f349b521       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago      Running             busybox                     1                   d4b1606caccc4       busybox                                          default
	b609c90a03360       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           53 seconds ago      Running             kindnet-cni                 1                   c9e86cefd5e15       kindnet-v9n7l                                    kube-system
	045ae702e454a       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           53 seconds ago      Running             kube-proxy                  1                   3d14930489ec2       kube-proxy-57s5h                                 kube-system
	7d4c3f3f4c978       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           59 seconds ago      Running             kube-scheduler              1                   900209e44d0f0       kube-scheduler-old-k8s-version-855707            kube-system
	dbb47e0c12746       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           59 seconds ago      Running             kube-controller-manager     1                   f76f7c54abedf       kube-controller-manager-old-k8s-version-855707   kube-system
	1aee215b0f372       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           59 seconds ago      Running             kube-apiserver              1                   c2b87e582a2d1       kube-apiserver-old-k8s-version-855707            kube-system
	f81dcf2a16152       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           59 seconds ago      Running             etcd                        1                   12468d7a4568b       etcd-old-k8s-version-855707                      kube-system
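
The listing above is CRI-O's view of the containers on the node. While the profile is still up, an equivalent listing can be pulled directly from the node (a sketch, assuming crictl is available inside the minikube node, as it is in the kicbase image used here):

	minikube -p old-k8s-version-855707 ssh -- sudo crictl ps -a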
	
	
	==> coredns [472e9ef0638e8f491ece4877ddb32d0f7b578b7729e6009843e0f87891b75f03] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36192 - 36987 "HINFO IN 99020971225076295.2904537491910346420. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.010850843s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-855707
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-855707
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=old-k8s-version-855707
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_50_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:50:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-855707
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:52:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:51:50 +0000   Sat, 27 Dec 2025 20:50:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:51:50 +0000   Sat, 27 Dec 2025 20:50:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:51:50 +0000   Sat, 27 Dec 2025 20:50:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:51:50 +0000   Sat, 27 Dec 2025 20:50:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-855707
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                6f46b687-0cf7-4b64-a058-88b55b2f77d5
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-5dd5756b68-gpcrh                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     107s
	  kube-system                 etcd-old-k8s-version-855707                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-v9n7l                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-old-k8s-version-855707             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-old-k8s-version-855707    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-57s5h                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-old-k8s-version-855707             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-fgk6l        0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-77hv8             0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 106s                 kube-proxy       
	  Normal  Starting                 53s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-855707 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-855707 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-855707 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m                   kubelet          Node old-k8s-version-855707 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m                   kubelet          Node old-k8s-version-855707 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node old-k8s-version-855707 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s                 node-controller  Node old-k8s-version-855707 event: Registered Node old-k8s-version-855707 in Controller
	  Normal  NodeReady                94s                  kubelet          Node old-k8s-version-855707 status is now: NodeReady
	  Normal  Starting                 61s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)    kubelet          Node old-k8s-version-855707 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)    kubelet          Node old-k8s-version-855707 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)    kubelet          Node old-k8s-version-855707 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           43s                  node-controller  Node old-k8s-version-855707 event: Registered Node old-k8s-version-855707 in Controller
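
The node dump above (conditions, capacity, non-terminated pods, events) is a standard node description and can be refreshed at any point while the profile is running, using the same context as above:

	kubectl --context old-k8s-version-855707 describe node old-k8s-version-855707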
	
	
	==> dmesg <==
	[Dec27 20:17] overlayfs: idmapped layers are currently not supported
	[Dec27 20:19] overlayfs: idmapped layers are currently not supported
	[ +36.244108] systemd-journald[225]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 20:22] overlayfs: idmapped layers are currently not supported
	[Dec27 20:23] overlayfs: idmapped layers are currently not supported
	[Dec27 20:24] overlayfs: idmapped layers are currently not supported
	[Dec27 20:25] overlayfs: idmapped layers are currently not supported
	[ +35.447549] overlayfs: idmapped layers are currently not supported
	[Dec27 20:26] overlayfs: idmapped layers are currently not supported
	[Dec27 20:27] overlayfs: idmapped layers are currently not supported
	[  +6.770645] overlayfs: idmapped layers are currently not supported
	[Dec27 20:28] overlayfs: idmapped layers are currently not supported
	[ +25.872751] overlayfs: idmapped layers are currently not supported
	[Dec27 20:29] overlayfs: idmapped layers are currently not supported
	[ +32.997137] overlayfs: idmapped layers are currently not supported
	[Dec27 20:31] overlayfs: idmapped layers are currently not supported
	[Dec27 20:33] overlayfs: idmapped layers are currently not supported
	[ +33.772475] overlayfs: idmapped layers are currently not supported
	[Dec27 20:39] overlayfs: idmapped layers are currently not supported
	[Dec27 20:40] overlayfs: idmapped layers are currently not supported
	[Dec27 20:44] overlayfs: idmapped layers are currently not supported
	[Dec27 20:45] overlayfs: idmapped layers are currently not supported
	[Dec27 20:49] overlayfs: idmapped layers are currently not supported
	[Dec27 20:50] overlayfs: idmapped layers are currently not supported
	[Dec27 20:51] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f81dcf2a161529d4fcaa1a2ecd3b730d50a5f75c24fdaf286d568185a6ad7aad] <==
	{"level":"info","ts":"2025-12-27T20:51:15.808368Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T20:51:15.808377Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T20:51:15.808567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-27T20:51:15.808618Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-27T20:51:15.808696Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T20:51:15.808721Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T20:51:15.81616Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-27T20:51:15.816925Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T20:51:15.817062Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T20:51:15.816583Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T20:51:15.817166Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T20:51:17.393492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T20:51:17.393619Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:51:17.393672Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T20:51:17.393714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T20:51:17.393745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T20:51:17.393785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T20:51:17.393819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T20:51:17.395341Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-855707 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:51:17.395422Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:51:17.396562Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T20:51:17.395469Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:51:17.402399Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:51:17.425405Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:51:17.425536Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:52:15 up  2:34,  0 user,  load average: 1.40, 1.66, 1.84
	Linux old-k8s-version-855707 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b609c90a033605921a0c78fea186518f1183108bfde56e07abdd5ad07826877b] <==
	I1227 20:51:22.247254       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:51:22.250449       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 20:51:22.250695       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:51:22.251356       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:51:22.251427       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:51:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:51:22.442076       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:51:22.442780       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:51:22.442837       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:51:22.442961       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 20:51:52.442466       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 20:51:52.442506       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1227 20:51:52.442619       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1227 20:51:52.443819       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1227 20:51:54.043310       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:51:54.043347       1 metrics.go:72] Registering metrics
	I1227 20:51:54.043397       1 controller.go:711] "Syncing nftables rules"
	I1227 20:52:02.443811       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:52:02.443849       1 main.go:301] handling current node
	I1227 20:52:12.449581       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:52:12.449619       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1aee215b0f3720a59c0901866e7a32993b55fc4f9c1a946cd923bcfd33eef1dd] <==
	I1227 20:51:20.138535       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:51:20.143159       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1227 20:51:20.148623       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1227 20:51:20.148724       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1227 20:51:20.150254       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1227 20:51:20.151084       1 shared_informer.go:318] Caches are synced for configmaps
	I1227 20:51:20.151690       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1227 20:51:20.151814       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 20:51:20.172232       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1227 20:51:20.172395       1 aggregator.go:166] initial CRD sync complete...
	I1227 20:51:20.172430       1 autoregister_controller.go:141] Starting autoregister controller
	I1227 20:51:20.172460       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:51:20.172492       1 cache.go:39] Caches are synced for autoregister controller
	E1227 20:51:20.215581       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 20:51:20.756630       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1227 20:51:21.512117       1 controller.go:624] quota admission added evaluator for: namespaces
	I1227 20:51:21.555066       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1227 20:51:21.584303       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:51:21.595554       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:51:21.608139       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1227 20:51:21.661292       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.29.180"}
	I1227 20:51:21.681998       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.77.80"}
	I1227 20:51:32.844061       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:51:32.852252       1 controller.go:624] quota admission added evaluator for: endpoints
	I1227 20:51:32.897211       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [dbb47e0c12746be14a55cae562bfb8e8d54317017f8f82d613e967bb89746d7e] <==
	I1227 20:51:32.930734       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-fgk6l"
	I1227 20:51:32.953957       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.701575ms"
	I1227 20:51:32.954305       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="44.267961ms"
	I1227 20:51:32.957684       1 shared_informer.go:318] Caches are synced for resource quota
	I1227 20:51:32.957782       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1227 20:51:32.971883       1 shared_informer.go:318] Caches are synced for daemon sets
	I1227 20:51:32.984467       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="30.417082ms"
	I1227 20:51:32.989890       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="35.543219ms"
	I1227 20:51:32.993765       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="55.252µs"
	I1227 20:51:33.002658       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="18.083732ms"
	I1227 20:51:33.002768       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.145µs"
	I1227 20:51:33.015346       1 shared_informer.go:318] Caches are synced for crt configmap
	I1227 20:51:33.050972       1 shared_informer.go:318] Caches are synced for resource quota
	I1227 20:51:33.373105       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 20:51:33.373158       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1227 20:51:33.407355       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 20:51:37.192056       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="41.55µs"
	I1227 20:51:38.201958       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.724µs"
	I1227 20:51:39.204816       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.291µs"
	I1227 20:51:42.241824       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="21.250713ms"
	I1227 20:51:42.242041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="79.563µs"
	I1227 20:51:56.258308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="65µs"
	I1227 20:51:58.849899       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.570881ms"
	I1227 20:51:58.850059       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.429µs"
	I1227 20:52:03.267391       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="58.821µs"
	
	
	==> kube-proxy [045ae702e454a1e50b74425e5caee0ffdfc04f73a5b99e25e3bcab98cb86fbb5] <==
	I1227 20:51:22.276907       1 server_others.go:69] "Using iptables proxy"
	I1227 20:51:22.292007       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1227 20:51:22.316299       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:51:22.318456       1 server_others.go:152] "Using iptables Proxier"
	I1227 20:51:22.318505       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1227 20:51:22.318514       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1227 20:51:22.318539       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1227 20:51:22.318770       1 server.go:846] "Version info" version="v1.28.0"
	I1227 20:51:22.318787       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:51:22.320003       1 config.go:188] "Starting service config controller"
	I1227 20:51:22.321036       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1227 20:51:22.321085       1 config.go:97] "Starting endpoint slice config controller"
	I1227 20:51:22.321101       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1227 20:51:22.323371       1 config.go:315] "Starting node config controller"
	I1227 20:51:22.327202       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1227 20:51:22.327942       1 shared_informer.go:318] Caches are synced for node config
	I1227 20:51:22.421379       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1227 20:51:22.421386       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [7d4c3f3f4c978744c9c5787250f663c59c07f863a1314cc8b1c62aeb93bd69f7] <==
	W1227 20:51:20.062247       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1227 20:51:20.062263       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1227 20:51:20.062318       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1227 20:51:20.062332       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1227 20:51:20.062380       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1227 20:51:20.062395       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1227 20:51:20.062449       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1227 20:51:20.062463       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1227 20:51:20.062514       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1227 20:51:20.062529       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1227 20:51:20.062569       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1227 20:51:20.062585       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1227 20:51:20.062622       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1227 20:51:20.062637       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1227 20:51:20.062673       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1227 20:51:20.062690       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1227 20:51:20.062740       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1227 20:51:20.062756       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1227 20:51:20.062798       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1227 20:51:20.062813       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1227 20:51:20.062850       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1227 20:51:20.062866       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1227 20:51:20.090892       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1227 20:51:20.090933       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1227 20:51:21.705795       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 27 20:51:33 old-k8s-version-855707 kubelet[786]: I1227 20:51:33.073008     786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnmwc\" (UniqueName: \"kubernetes.io/projected/decbcdbb-b3dc-4d09-acc2-6cd6d5cda634-kube-api-access-wnmwc\") pod \"dashboard-metrics-scraper-5f989dc9cf-fgk6l\" (UID: \"decbcdbb-b3dc-4d09-acc2-6cd6d5cda634\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fgk6l"
	Dec 27 20:51:33 old-k8s-version-855707 kubelet[786]: I1227 20:51:33.073044     786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a193555c-c195-4ba4-8eb9-c4c8e4a915df-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-77hv8\" (UID: \"a193555c-c195-4ba4-8eb9-c4c8e4a915df\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-77hv8"
	Dec 27 20:51:33 old-k8s-version-855707 kubelet[786]: I1227 20:51:33.073074     786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/decbcdbb-b3dc-4d09-acc2-6cd6d5cda634-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-fgk6l\" (UID: \"decbcdbb-b3dc-4d09-acc2-6cd6d5cda634\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fgk6l"
	Dec 27 20:51:33 old-k8s-version-855707 kubelet[786]: W1227 20:51:33.279580     786 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2/crio-f3fab93b2872fb46b81689be2167c5e13b65bd16dafe4483e28d388d9382d93c WatchSource:0}: Error finding container f3fab93b2872fb46b81689be2167c5e13b65bd16dafe4483e28d388d9382d93c: Status 404 returned error can't find the container with id f3fab93b2872fb46b81689be2167c5e13b65bd16dafe4483e28d388d9382d93c
	Dec 27 20:51:33 old-k8s-version-855707 kubelet[786]: W1227 20:51:33.292655     786 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2/crio-a9ccbd6764cd988eb181081d292ea75db3184866fd11c40c5badab32e483af9c WatchSource:0}: Error finding container a9ccbd6764cd988eb181081d292ea75db3184866fd11c40c5badab32e483af9c: Status 404 returned error can't find the container with id a9ccbd6764cd988eb181081d292ea75db3184866fd11c40c5badab32e483af9c
	Dec 27 20:51:37 old-k8s-version-855707 kubelet[786]: I1227 20:51:37.178279     786 scope.go:117] "RemoveContainer" containerID="486e38fd6e5049de2dda88072d6c48970e66be06078604221d640ef4c5e70476"
	Dec 27 20:51:38 old-k8s-version-855707 kubelet[786]: I1227 20:51:38.184659     786 scope.go:117] "RemoveContainer" containerID="486e38fd6e5049de2dda88072d6c48970e66be06078604221d640ef4c5e70476"
	Dec 27 20:51:38 old-k8s-version-855707 kubelet[786]: I1227 20:51:38.184969     786 scope.go:117] "RemoveContainer" containerID="f34d240c73df632f12ba0708f523cf3e099946def9fe310317b9fe10cf92238c"
	Dec 27 20:51:38 old-k8s-version-855707 kubelet[786]: E1227 20:51:38.185227     786 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fgk6l_kubernetes-dashboard(decbcdbb-b3dc-4d09-acc2-6cd6d5cda634)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fgk6l" podUID="decbcdbb-b3dc-4d09-acc2-6cd6d5cda634"
	Dec 27 20:51:39 old-k8s-version-855707 kubelet[786]: I1227 20:51:39.189388     786 scope.go:117] "RemoveContainer" containerID="f34d240c73df632f12ba0708f523cf3e099946def9fe310317b9fe10cf92238c"
	Dec 27 20:51:39 old-k8s-version-855707 kubelet[786]: E1227 20:51:39.190176     786 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fgk6l_kubernetes-dashboard(decbcdbb-b3dc-4d09-acc2-6cd6d5cda634)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fgk6l" podUID="decbcdbb-b3dc-4d09-acc2-6cd6d5cda634"
	Dec 27 20:51:43 old-k8s-version-855707 kubelet[786]: I1227 20:51:43.250947     786 scope.go:117] "RemoveContainer" containerID="f34d240c73df632f12ba0708f523cf3e099946def9fe310317b9fe10cf92238c"
	Dec 27 20:51:43 old-k8s-version-855707 kubelet[786]: E1227 20:51:43.251286     786 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fgk6l_kubernetes-dashboard(decbcdbb-b3dc-4d09-acc2-6cd6d5cda634)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fgk6l" podUID="decbcdbb-b3dc-4d09-acc2-6cd6d5cda634"
	Dec 27 20:51:53 old-k8s-version-855707 kubelet[786]: I1227 20:51:53.226744     786 scope.go:117] "RemoveContainer" containerID="990f851e3eb3233bbb21418beecefa82d2748a136f67502e18f0e49d805ab852"
	Dec 27 20:51:53 old-k8s-version-855707 kubelet[786]: I1227 20:51:53.270007     786 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-77hv8" podStartSLOduration=13.288276278 podCreationTimestamp="2025-12-27 20:51:32 +0000 UTC" firstStartedPulling="2025-12-27 20:51:33.299155605 +0000 UTC m=+18.411611516" lastFinishedPulling="2025-12-27 20:51:41.280796317 +0000 UTC m=+26.393252236" observedRunningTime="2025-12-27 20:51:42.221141032 +0000 UTC m=+27.333596951" watchObservedRunningTime="2025-12-27 20:51:53.269916998 +0000 UTC m=+38.382372909"
	Dec 27 20:51:56 old-k8s-version-855707 kubelet[786]: I1227 20:51:56.023874     786 scope.go:117] "RemoveContainer" containerID="f34d240c73df632f12ba0708f523cf3e099946def9fe310317b9fe10cf92238c"
	Dec 27 20:51:56 old-k8s-version-855707 kubelet[786]: I1227 20:51:56.237780     786 scope.go:117] "RemoveContainer" containerID="f34d240c73df632f12ba0708f523cf3e099946def9fe310317b9fe10cf92238c"
	Dec 27 20:51:56 old-k8s-version-855707 kubelet[786]: I1227 20:51:56.238003     786 scope.go:117] "RemoveContainer" containerID="9f13af8e3cf48801f0478b82b99ee66bcfc73622a69c8aac7a5a9bc87d6b8dad"
	Dec 27 20:51:56 old-k8s-version-855707 kubelet[786]: E1227 20:51:56.238280     786 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fgk6l_kubernetes-dashboard(decbcdbb-b3dc-4d09-acc2-6cd6d5cda634)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fgk6l" podUID="decbcdbb-b3dc-4d09-acc2-6cd6d5cda634"
	Dec 27 20:52:03 old-k8s-version-855707 kubelet[786]: I1227 20:52:03.251356     786 scope.go:117] "RemoveContainer" containerID="9f13af8e3cf48801f0478b82b99ee66bcfc73622a69c8aac7a5a9bc87d6b8dad"
	Dec 27 20:52:03 old-k8s-version-855707 kubelet[786]: E1227 20:52:03.252227     786 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fgk6l_kubernetes-dashboard(decbcdbb-b3dc-4d09-acc2-6cd6d5cda634)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fgk6l" podUID="decbcdbb-b3dc-4d09-acc2-6cd6d5cda634"
	Dec 27 20:52:12 old-k8s-version-855707 kubelet[786]: I1227 20:52:12.944697     786 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 27 20:52:12 old-k8s-version-855707 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 20:52:12 old-k8s-version-855707 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 20:52:12 old-k8s-version-855707 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [e9fac01bcc780f0a4cbad6be58cc6198eff0b31d06a677ed0563d738670e0d3f] <==
	2025/12/27 20:51:41 Using namespace: kubernetes-dashboard
	2025/12/27 20:51:41 Using in-cluster config to connect to apiserver
	2025/12/27 20:51:41 Using secret token for csrf signing
	2025/12/27 20:51:41 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 20:51:41 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 20:51:41 Successful initial request to the apiserver, version: v1.28.0
	2025/12/27 20:51:41 Generating JWE encryption key
	2025/12/27 20:51:41 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 20:51:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 20:51:42 Initializing JWE encryption key from synchronized object
	2025/12/27 20:51:42 Creating in-cluster Sidecar client
	2025/12/27 20:51:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:51:42 Serving insecurely on HTTP port: 9090
	2025/12/27 20:52:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:51:41 Starting overwatch
	
	
	==> storage-provisioner [990f851e3eb3233bbb21418beecefa82d2748a136f67502e18f0e49d805ab852] <==
	I1227 20:51:22.256338       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 20:51:52.258614       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c896e123ec6228b1d4e8e675ab96d63527b576dc221b62eaba55cf1b039bbaab] <==
	I1227 20:51:53.279189       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 20:51:53.296103       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 20:51:53.296207       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1227 20:52:10.693187       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 20:52:10.693514       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-855707_b04fae04-420c-43de-bf55-797a5381f59e!
	I1227 20:52:10.694171       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c3590326-14f9-4148-8efd-a85b09a3c11f", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-855707_b04fae04-420c-43de-bf55-797a5381f59e became leader
	I1227 20:52:10.794293       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-855707_b04fae04-420c-43de-bf55-797a5381f59e!
	

-- /stdout --
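A minimal spot-check sketch, not part of the captured test output and assuming the old-k8s-version-855707 profile is still running: the lease acquired by the second storage-provisioner instance in the log above is recorded as a leader-election annotation on the kube-system/k8s.io-minikube-hostpath Endpoints object, so it could be read back with:

kubectl --context old-k8s-version-855707 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml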
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-855707 -n old-k8s-version-855707
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-855707 -n old-k8s-version-855707: exit status 2 (374.028607ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-855707 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-855707
helpers_test.go:244: (dbg) docker inspect old-k8s-version-855707:

-- stdout --
	[
	    {
	        "Id": "ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2",
	        "Created": "2025-12-27T20:49:49.083982112Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 484582,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:51:08.05684373Z",
	            "FinishedAt": "2025-12-27T20:51:07.268796901Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2/hostname",
	        "HostsPath": "/var/lib/docker/containers/ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2/hosts",
	        "LogPath": "/var/lib/docker/containers/ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2/ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2-json.log",
	        "Name": "/old-k8s-version-855707",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-855707:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-855707",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2",
	                "LowerDir": "/var/lib/docker/overlay2/4cfbcea77308f009a9856e9df5c3a29b9bfd669c592158a020d3e7751dc1e39a-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4cfbcea77308f009a9856e9df5c3a29b9bfd669c592158a020d3e7751dc1e39a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4cfbcea77308f009a9856e9df5c3a29b9bfd669c592158a020d3e7751dc1e39a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4cfbcea77308f009a9856e9df5c3a29b9bfd669c592158a020d3e7751dc1e39a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-855707",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-855707/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-855707",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-855707",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-855707",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ddd82653966951316be61d0975c9ce25540a7124177cce3580b840e086388eac",
	            "SandboxKey": "/var/run/docker/netns/ddd826539669",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33413"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33414"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33417"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33415"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33416"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-855707": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:ce:6e:6e:d5:43",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "30f18a4a5fe47b52fd514e9e7c68df45288c84a3f84ad77d2d2746ff085abb75",
	                    "EndpointID": "d851d0916c80ba1480c2eb386e6fa4bb51e709fa7ce25c24cc5c624b2af0cb11",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-855707",
	                        "ffdc66f60c1f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
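Illustrative only (assuming the old-k8s-version-855707 container is still running): the host-port mappings captured in the inspect output above can also be read back directly, e.g.

docker port old-k8s-version-855707 8443

which should print 127.0.0.1:33416, matching the NetworkSettings.Ports entry shown above.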
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-855707 -n old-k8s-version-855707
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-855707 -n old-k8s-version-855707: exit status 2 (335.310365ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-855707 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-855707 logs -n 25: (1.236882583s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-037975 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo containerd config dump                                                                                                                                                                                                  │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo crio config                                                                                                                                                                                                             │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ delete  │ -p cilium-037975                                                                                                                                                                                                                              │ cilium-037975             │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │ 27 Dec 25 20:45 UTC │
	│ start   │ -p cert-expiration-629954 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-629954    │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │ 27 Dec 25 20:45 UTC │
	│ start   │ -p cert-expiration-629954 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-629954    │ jenkins │ v1.37.0 │ 27 Dec 25 20:48 UTC │ 27 Dec 25 20:48 UTC │
	│ delete  │ -p cert-expiration-629954                                                                                                                                                                                                                     │ cert-expiration-629954    │ jenkins │ v1.37.0 │ 27 Dec 25 20:48 UTC │ 27 Dec 25 20:48 UTC │
	│ start   │ -p force-systemd-flag-604544 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-604544 │ jenkins │ v1.37.0 │ 27 Dec 25 20:48 UTC │                     │
	│ delete  │ -p force-systemd-env-859716                                                                                                                                                                                                                   │ force-systemd-env-859716  │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ start   │ -p cert-options-765175 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-765175       │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ ssh     │ cert-options-765175 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-765175       │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ ssh     │ -p cert-options-765175 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-765175       │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ delete  │ -p cert-options-765175                                                                                                                                                                                                                        │ cert-options-765175       │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ start   │ -p old-k8s-version-855707 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-855707    │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-855707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-855707    │ jenkins │ v1.37.0 │ 27 Dec 25 20:50 UTC │                     │
	│ stop    │ -p old-k8s-version-855707 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-855707    │ jenkins │ v1.37.0 │ 27 Dec 25 20:50 UTC │ 27 Dec 25 20:51 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-855707 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-855707    │ jenkins │ v1.37.0 │ 27 Dec 25 20:51 UTC │ 27 Dec 25 20:51 UTC │
	│ start   │ -p old-k8s-version-855707 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-855707    │ jenkins │ v1.37.0 │ 27 Dec 25 20:51 UTC │ 27 Dec 25 20:52 UTC │
	│ image   │ old-k8s-version-855707 image list --format=json                                                                                                                                                                                               │ old-k8s-version-855707    │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ pause   │ -p old-k8s-version-855707 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-855707    │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:51:07
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:51:07.787567  484456 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:51:07.787711  484456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:51:07.787724  484456 out.go:374] Setting ErrFile to fd 2...
	I1227 20:51:07.787743  484456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:51:07.788028  484456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:51:07.788408  484456 out.go:368] Setting JSON to false
	I1227 20:51:07.789247  484456 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9220,"bootTime":1766859448,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:51:07.789317  484456 start.go:143] virtualization:  
	I1227 20:51:07.792366  484456 out.go:179] * [old-k8s-version-855707] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:51:07.796252  484456 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:51:07.796340  484456 notify.go:221] Checking for updates...
	I1227 20:51:07.802229  484456 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:51:07.805168  484456 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:51:07.808082  484456 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:51:07.810884  484456 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:51:07.813655  484456 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:51:07.817052  484456 config.go:182] Loaded profile config "old-k8s-version-855707": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 20:51:07.820521  484456 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I1227 20:51:07.823378  484456 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:51:07.851963  484456 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:51:07.852076  484456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:51:07.908499  484456 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:51:07.899491428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:51:07.908598  484456 docker.go:319] overlay module found
	I1227 20:51:07.911734  484456 out.go:179] * Using the docker driver based on existing profile
	I1227 20:51:07.914620  484456 start.go:309] selected driver: docker
	I1227 20:51:07.914639  484456 start.go:928] validating driver "docker" against &{Name:old-k8s-version-855707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-855707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:51:07.914742  484456 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:51:07.915463  484456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:51:07.970538  484456 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:51:07.961739252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:51:07.970917  484456 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:51:07.970947  484456 cni.go:84] Creating CNI manager for ""
	I1227 20:51:07.970997  484456 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:51:07.971039  484456 start.go:353] cluster config:
	{Name:old-k8s-version-855707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-855707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:51:07.974353  484456 out.go:179] * Starting "old-k8s-version-855707" primary control-plane node in "old-k8s-version-855707" cluster
	I1227 20:51:07.977239  484456 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:51:07.980169  484456 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:51:07.983008  484456 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 20:51:07.983052  484456 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:51:07.983061  484456 cache.go:65] Caching tarball of preloaded images
	I1227 20:51:07.983095  484456 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:51:07.983140  484456 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:51:07.983150  484456 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1227 20:51:07.983263  484456 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/config.json ...
	I1227 20:51:08.002169  484456 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:51:08.002193  484456 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:51:08.002210  484456 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:51:08.002242  484456 start.go:360] acquireMachinesLock for old-k8s-version-855707: {Name:mk772100ba05b793472926b85f6f775654e62c2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:51:08.002299  484456 start.go:364] duration metric: took 35.61µs to acquireMachinesLock for "old-k8s-version-855707"
	I1227 20:51:08.002324  484456 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:51:08.002335  484456 fix.go:54] fixHost starting: 
	I1227 20:51:08.002603  484456 cli_runner.go:164] Run: docker container inspect old-k8s-version-855707 --format={{.State.Status}}
	I1227 20:51:08.022395  484456 fix.go:112] recreateIfNeeded on old-k8s-version-855707: state=Stopped err=<nil>
	W1227 20:51:08.022430  484456 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:51:08.025691  484456 out.go:252] * Restarting existing docker container for "old-k8s-version-855707" ...
	I1227 20:51:08.025783  484456 cli_runner.go:164] Run: docker start old-k8s-version-855707
	I1227 20:51:08.291580  484456 cli_runner.go:164] Run: docker container inspect old-k8s-version-855707 --format={{.State.Status}}
	I1227 20:51:08.319843  484456 kic.go:430] container "old-k8s-version-855707" state is running.
	I1227 20:51:08.320218  484456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-855707
	I1227 20:51:08.338512  484456 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/config.json ...
	I1227 20:51:08.338765  484456 machine.go:94] provisionDockerMachine start ...
	I1227 20:51:08.338845  484456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:51:08.362110  484456 main.go:144] libmachine: Using SSH client type: native
	I1227 20:51:08.362485  484456 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1227 20:51:08.362504  484456 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:51:08.363077  484456 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39570->127.0.0.1:33413: read: connection reset by peer
	I1227 20:51:11.505496  484456 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-855707
	
	I1227 20:51:11.505520  484456 ubuntu.go:182] provisioning hostname "old-k8s-version-855707"
	I1227 20:51:11.505595  484456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:51:11.528876  484456 main.go:144] libmachine: Using SSH client type: native
	I1227 20:51:11.529190  484456 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1227 20:51:11.529208  484456 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-855707 && echo "old-k8s-version-855707" | sudo tee /etc/hostname
	I1227 20:51:11.678067  484456 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-855707
	
	I1227 20:51:11.678143  484456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:51:11.694480  484456 main.go:144] libmachine: Using SSH client type: native
	I1227 20:51:11.694817  484456 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1227 20:51:11.694840  484456 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-855707' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-855707/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-855707' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:51:11.833693  484456 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:51:11.833722  484456 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:51:11.833742  484456 ubuntu.go:190] setting up certificates
	I1227 20:51:11.833750  484456 provision.go:84] configureAuth start
	I1227 20:51:11.833811  484456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-855707
	I1227 20:51:11.850391  484456 provision.go:143] copyHostCerts
	I1227 20:51:11.850458  484456 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:51:11.850478  484456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:51:11.850564  484456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:51:11.850665  484456 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:51:11.850676  484456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:51:11.850704  484456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:51:11.850768  484456 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:51:11.850776  484456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:51:11.850801  484456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:51:11.850860  484456 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-855707 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-855707]
	I1227 20:51:12.347440  484456 provision.go:177] copyRemoteCerts
	I1227 20:51:12.347536  484456 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:51:12.347608  484456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:51:12.364628  484456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa Username:docker}
	I1227 20:51:12.465143  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:51:12.482392  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1227 20:51:12.500330  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:51:12.517730  484456 provision.go:87] duration metric: took 683.955142ms to configureAuth
	I1227 20:51:12.517760  484456 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:51:12.517956  484456 config.go:182] Loaded profile config "old-k8s-version-855707": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 20:51:12.518064  484456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:51:12.536262  484456 main.go:144] libmachine: Using SSH client type: native
	I1227 20:51:12.536578  484456 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1227 20:51:12.536593  484456 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:51:12.869074  484456 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:51:12.869098  484456 machine.go:97] duration metric: took 4.530317273s to provisionDockerMachine
	I1227 20:51:12.869111  484456 start.go:293] postStartSetup for "old-k8s-version-855707" (driver="docker")
	I1227 20:51:12.869122  484456 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:51:12.869188  484456 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:51:12.869235  484456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:51:12.888239  484456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa Username:docker}
	I1227 20:51:12.990134  484456 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:51:12.993924  484456 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:51:12.993955  484456 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:51:12.993974  484456 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:51:12.994024  484456 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:51:12.994107  484456 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:51:12.994222  484456 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:51:13.002108  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:51:13.023849  484456 start.go:296] duration metric: took 154.723437ms for postStartSetup
	I1227 20:51:13.023927  484456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:51:13.023986  484456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:51:13.046912  484456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa Username:docker}
	I1227 20:51:13.142554  484456 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:51:13.146895  484456 fix.go:56] duration metric: took 5.144551799s for fixHost
	I1227 20:51:13.146923  484456 start.go:83] releasing machines lock for "old-k8s-version-855707", held for 5.144610227s
	I1227 20:51:13.147006  484456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-855707
	I1227 20:51:13.163560  484456 ssh_runner.go:195] Run: cat /version.json
	I1227 20:51:13.163586  484456 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:51:13.163611  484456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:51:13.163639  484456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:51:13.185855  484456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa Username:docker}
	I1227 20:51:13.195254  484456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa Username:docker}
	I1227 20:51:13.289089  484456 ssh_runner.go:195] Run: systemctl --version
	I1227 20:51:13.384704  484456 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:51:13.418933  484456 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:51:13.423054  484456 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:51:13.423130  484456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:51:13.430400  484456 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
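
Note: the `find ... -exec mv {} {}.mk_disabled` step above sidelines any bridge or podman CNI configs so they cannot conflict with the kindnet CNI that minikube recommends for this driver/runtime combination. Below is a minimal, illustrative Go sketch of the same idea; the /etc/cni/net.d path and the .mk_disabled suffix come from the log, while the program itself is not minikube's code and would need root to actually rename files there.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	cniDir := "/etc/cni/net.d" // directory scanned by the find command in the log

	entries, err := os.ReadDir(cniDir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		// Match the same patterns as the find invocation (*bridge* or *podman*),
		// skipping directories and files that were already disabled.
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(cniDir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("disabled %s\n", src)
	}
}
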
	I1227 20:51:13.430462  484456 start.go:496] detecting cgroup driver to use...
	I1227 20:51:13.430507  484456 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:51:13.430564  484456 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:51:13.444722  484456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:51:13.457223  484456 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:51:13.457314  484456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:51:13.472381  484456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:51:13.484977  484456 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:51:13.594462  484456 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:51:13.720284  484456 docker.go:234] disabling docker service ...
	I1227 20:51:13.720375  484456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:51:13.738884  484456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:51:13.752235  484456 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:51:13.875247  484456 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:51:13.995477  484456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:51:14.010932  484456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:51:14.026627  484456 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1227 20:51:14.026798  484456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:51:14.035944  484456 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:51:14.036075  484456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:51:14.045701  484456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:51:14.054692  484456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:51:14.063605  484456 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:51:14.071933  484456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:51:14.080835  484456 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:51:14.089342  484456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:51:14.098557  484456 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:51:14.106216  484456 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:51:14.113531  484456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:51:14.220312  484456 ssh_runner.go:195] Run: sudo systemctl restart crio
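
Note: the sed invocations above rewrite two keys in /etc/crio/crio.conf.d/02-crio.conf (the pause image and the cgroup manager) before CRI-O is restarted. A rough Go equivalent of that edit is sketched below; the file path, key names, and values are the ones this run uses, the program itself is only illustrative, and writing that file would normally require root.

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // file edited by the sed commands in the log

	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}

	// Mirror the two sed expressions: force pause_image and cgroup_manager
	// onto known values, whatever they were set to before.
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	out = cgroup.ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(conf, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
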
	I1227 20:51:14.420615  484456 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:51:14.420716  484456 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:51:14.424380  484456 start.go:574] Will wait 60s for crictl version
	I1227 20:51:14.424441  484456 ssh_runner.go:195] Run: which crictl
	I1227 20:51:14.427769  484456 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:51:14.457343  484456 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:51:14.457520  484456 ssh_runner.go:195] Run: crio --version
	I1227 20:51:14.494169  484456 ssh_runner.go:195] Run: crio --version
	I1227 20:51:14.538429  484456 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1227 20:51:14.541393  484456 cli_runner.go:164] Run: docker network inspect old-k8s-version-855707 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:51:14.557661  484456 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 20:51:14.561585  484456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
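
Note: the bash one-liner above drops any stale host.minikube.internal entry from /etc/hosts and appends the current gateway IP. The Go sketch below does the same rewrite under the same assumptions (IP and hostname copied from this run); since /etc/hosts is root-owned, the sketch prints the result instead of writing it back.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostName = "host.minikube.internal" // entry managed in the log
	const hostIP = "192.168.76.1"             // gateway IP from this run

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}

	// Keep every line that does not already end in the managed hostname,
	// then append a fresh "IP<TAB>hostname" entry - the same effect as
	// `grep -v` followed by an echo append in the log.
	var out []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostName) {
			continue
		}
		out = append(out, line)
	}
	out = append(out, hostIP+"\t"+hostName)

	// Printed instead of written back, to keep the sketch root-free.
	fmt.Println(strings.Join(out, "\n"))
}
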
	I1227 20:51:14.572329  484456 kubeadm.go:884] updating cluster {Name:old-k8s-version-855707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-855707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:51:14.572446  484456 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 20:51:14.572505  484456 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:51:14.609876  484456 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:51:14.609901  484456 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:51:14.609962  484456 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:51:14.636645  484456 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:51:14.636679  484456 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:51:14.636687  484456 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1227 20:51:14.636787  484456 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-855707 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-855707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:51:14.636871  484456 ssh_runner.go:195] Run: crio config
	I1227 20:51:14.689645  484456 cni.go:84] Creating CNI manager for ""
	I1227 20:51:14.689712  484456 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:51:14.689742  484456 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:51:14.689766  484456 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-855707 NodeName:old-k8s-version-855707 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:51:14.689958  484456 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-855707"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:51:14.690035  484456 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1227 20:51:14.697839  484456 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:51:14.697918  484456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:51:14.705328  484456 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1227 20:51:14.717985  484456 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:51:14.730780  484456 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
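
Note: the kubeadm.yaml.new copied above is rendered in memory from the option struct dumped at kubeadm.go:197 and the config printed at kubeadm.go:203, then compared against the file already on the node. The toy Go sketch below shows that kind of render step with text/template over a trimmed-down fragment of the InitConfiguration; the field values are the ones from this run, but the struct and template text are illustrative only and are not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// opts holds just the fields needed for this fragment; minikube's real
// option struct carries far more (see the kubeadm.go:197 dump above).
type opts struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	CRISocket        string
}

const initConfig = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
`

func main() {
	o := opts{
		AdvertiseAddress: "192.168.76.2",
		APIServerPort:    8443,
		NodeName:         "old-k8s-version-855707",
		CRISocket:        "/var/run/crio/crio.sock",
	}
	tmpl := template.Must(template.New("init").Parse(initConfig))
	// Render to stdout; minikube instead copies the rendered bytes to
	// /var/tmp/minikube/kubeadm.yaml.new on the node.
	if err := tmpl.Execute(os.Stdout, o); err != nil {
		panic(err)
	}
}
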
	I1227 20:51:14.744273  484456 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:51:14.748309  484456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:51:14.758766  484456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:51:14.869968  484456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:51:14.885558  484456 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707 for IP: 192.168.76.2
	I1227 20:51:14.885577  484456 certs.go:195] generating shared ca certs ...
	I1227 20:51:14.885592  484456 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:51:14.885761  484456 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:51:14.885827  484456 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:51:14.885843  484456 certs.go:257] generating profile certs ...
	I1227 20:51:14.885947  484456 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/client.key
	I1227 20:51:14.886022  484456 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/apiserver.key.cdba09ac
	I1227 20:51:14.886077  484456 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/proxy-client.key
	I1227 20:51:14.886201  484456 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:51:14.886246  484456 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:51:14.886260  484456 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:51:14.886298  484456 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:51:14.886337  484456 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:51:14.886366  484456 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:51:14.886424  484456 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:51:14.892846  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:51:14.915757  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:51:14.935221  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:51:14.956646  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:51:14.976089  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1227 20:51:14.999382  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:51:15.034536  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:51:15.057796  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:51:15.085955  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:51:15.107663  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:51:15.128688  484456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:51:15.147814  484456 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:51:15.162484  484456 ssh_runner.go:195] Run: openssl version
	I1227 20:51:15.168801  484456 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:51:15.176562  484456 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:51:15.184357  484456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:51:15.188362  484456 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:51:15.188441  484456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:51:15.232628  484456 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:51:15.240265  484456 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:51:15.247575  484456 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:51:15.255107  484456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:51:15.259595  484456 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:51:15.259711  484456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:51:15.301135  484456 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:51:15.308675  484456 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:51:15.315798  484456 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:51:15.323345  484456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:51:15.327101  484456 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:51:15.327196  484456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:51:15.367968  484456 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:51:15.375350  484456 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:51:15.379004  484456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:51:15.420213  484456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:51:15.493997  484456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:51:15.564213  484456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:51:15.646564  484456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:51:15.709902  484456 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
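
Note: each `openssl x509 -checkend 86400` call above asks whether a certificate will still be valid 24 hours from now, so minikube can regenerate anything that is about to expire. The same check in a few lines of Go is sketched below; the path is one of the certs checked in the log and would normally only be readable as root, and the program is illustrative rather than minikube's own implementation.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// One of the certificates checked above.
	const certPath = "/var/lib/minikube/certs/apiserver-kubelet-client.crt"

	data, err := os.ReadFile(certPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}

	// Equivalent of `-checkend 86400`: fail if the cert expires
	// within the next 24 hours.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Printf("certificate expires soon: %s\n", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Printf("certificate valid until %s\n", cert.NotAfter)
}
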
	I1227 20:51:15.781312  484456 kubeadm.go:401] StartCluster: {Name:old-k8s-version-855707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-855707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:51:15.781477  484456 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:51:15.781594  484456 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:51:15.821824  484456 cri.go:96] found id: "7d4c3f3f4c978744c9c5787250f663c59c07f863a1314cc8b1c62aeb93bd69f7"
	I1227 20:51:15.821904  484456 cri.go:96] found id: "dbb47e0c12746be14a55cae562bfb8e8d54317017f8f82d613e967bb89746d7e"
	I1227 20:51:15.821932  484456 cri.go:96] found id: "1aee215b0f3720a59c0901866e7a32993b55fc4f9c1a946cd923bcfd33eef1dd"
	I1227 20:51:15.821950  484456 cri.go:96] found id: "f81dcf2a161529d4fcaa1a2ecd3b730d50a5f75c24fdaf286d568185a6ad7aad"
	I1227 20:51:15.821992  484456 cri.go:96] found id: ""
	I1227 20:51:15.822083  484456 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:51:15.833740  484456 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:51:15Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:51:15.833866  484456 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:51:15.851488  484456 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:51:15.851561  484456 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:51:15.851641  484456 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:51:15.861881  484456 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:51:15.862383  484456 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-855707" does not appear in /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:51:15.862544  484456 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-272475/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-855707" cluster setting kubeconfig missing "old-k8s-version-855707" context setting]
	I1227 20:51:15.862913  484456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:51:15.864550  484456 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:51:15.872046  484456 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1227 20:51:15.872125  484456 kubeadm.go:602] duration metric: took 20.531313ms to restartPrimaryControlPlane
	I1227 20:51:15.872157  484456 kubeadm.go:403] duration metric: took 90.848435ms to StartCluster
	I1227 20:51:15.872203  484456 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:51:15.872292  484456 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:51:15.873004  484456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:51:15.873272  484456 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:51:15.873726  484456 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:51:15.873801  484456 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-855707"
	I1227 20:51:15.873814  484456 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-855707"
	W1227 20:51:15.873820  484456 addons.go:248] addon storage-provisioner should already be in state true
	I1227 20:51:15.873842  484456 host.go:66] Checking if "old-k8s-version-855707" exists ...
	I1227 20:51:15.874558  484456 cli_runner.go:164] Run: docker container inspect old-k8s-version-855707 --format={{.State.Status}}
	I1227 20:51:15.874951  484456 config.go:182] Loaded profile config "old-k8s-version-855707": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 20:51:15.875084  484456 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-855707"
	I1227 20:51:15.875137  484456 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-855707"
	I1227 20:51:15.875448  484456 cli_runner.go:164] Run: docker container inspect old-k8s-version-855707 --format={{.State.Status}}
	I1227 20:51:15.875710  484456 addons.go:70] Setting dashboard=true in profile "old-k8s-version-855707"
	I1227 20:51:15.875748  484456 addons.go:239] Setting addon dashboard=true in "old-k8s-version-855707"
	W1227 20:51:15.875780  484456 addons.go:248] addon dashboard should already be in state true
	I1227 20:51:15.875836  484456 host.go:66] Checking if "old-k8s-version-855707" exists ...
	I1227 20:51:15.876320  484456 cli_runner.go:164] Run: docker container inspect old-k8s-version-855707 --format={{.State.Status}}
	I1227 20:51:15.885406  484456 out.go:179] * Verifying Kubernetes components...
	I1227 20:51:15.893575  484456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:51:15.923557  484456 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:51:15.927521  484456 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:51:15.927544  484456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:51:15.927613  484456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:51:15.930378  484456 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-855707"
	W1227 20:51:15.930401  484456 addons.go:248] addon default-storageclass should already be in state true
	I1227 20:51:15.930427  484456 host.go:66] Checking if "old-k8s-version-855707" exists ...
	I1227 20:51:15.930852  484456 cli_runner.go:164] Run: docker container inspect old-k8s-version-855707 --format={{.State.Status}}
	I1227 20:51:15.940721  484456 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 20:51:15.943655  484456 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 20:51:15.949504  484456 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 20:51:15.949535  484456 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 20:51:15.949610  484456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:51:15.998932  484456 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:51:15.998954  484456 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:51:15.999018  484456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-855707
	I1227 20:51:16.014162  484456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa Username:docker}
	I1227 20:51:16.029619  484456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa Username:docker}
	I1227 20:51:16.042688  484456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/old-k8s-version-855707/id_rsa Username:docker}
	I1227 20:51:16.264298  484456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:51:16.272736  484456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:51:16.299206  484456 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 20:51:16.299229  484456 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 20:51:16.366876  484456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:51:16.367651  484456 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 20:51:16.367703  484456 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 20:51:16.420114  484456 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 20:51:16.420183  484456 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 20:51:16.503172  484456 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 20:51:16.503255  484456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 20:51:16.587475  484456 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 20:51:16.587545  484456 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 20:51:16.642914  484456 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 20:51:16.642982  484456 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 20:51:16.665908  484456 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 20:51:16.665978  484456 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 20:51:16.684928  484456 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 20:51:16.685002  484456 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 20:51:16.709981  484456 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:51:16.710058  484456 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 20:51:16.736373  484456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:51:21.171667  484456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.907288517s)
	I1227 20:51:21.171771  484456 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.899012566s)
	I1227 20:51:21.171911  484456 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-855707" to be "Ready" ...
	I1227 20:51:21.171816  484456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.804855026s)
	I1227 20:51:21.202041  484456 node_ready.go:49] node "old-k8s-version-855707" is "Ready"
	I1227 20:51:21.202072  484456 node_ready.go:38] duration metric: took 30.044783ms for node "old-k8s-version-855707" to be "Ready" ...
	I1227 20:51:21.202087  484456 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:51:21.202160  484456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:51:21.690485  484456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.954013954s)
	I1227 20:51:21.690591  484456 api_server.go:72] duration metric: took 5.817262053s to wait for apiserver process to appear ...
	I1227 20:51:21.690800  484456 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:51:21.690818  484456 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:51:21.694869  484456 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-855707 addons enable metrics-server
	
	I1227 20:51:21.697851  484456 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1227 20:51:21.700867  484456 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 20:51:21.701360  484456 addons.go:530] duration metric: took 5.827636658s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1227 20:51:21.702343  484456 api_server.go:141] control plane version: v1.28.0
	I1227 20:51:21.702369  484456 api_server.go:131] duration metric: took 11.561467ms to wait for apiserver health ...
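
Note: the healthz wait above simply polls https://192.168.76.2:8443/healthz until it answers 200/ok. A stripped-down Go version of that probe is sketched below; the endpoint and CA path are taken from this run, the 500ms retry interval is an arbitrary choice, and unlike minikube's wait loop this sketch does not bound the total wait time.

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	const url = "https://192.168.76.2:8443/healthz" // endpoint polled in the log
	const caPath = "/var/lib/minikube/certs/ca.crt" // cluster CA copied to the node earlier

	// Trust the cluster CA so the apiserver's serving cert verifies.
	pool := x509.NewCertPool()
	if ca, err := os.ReadFile(caPath); err == nil {
		pool.AppendCertsFromPEM(ca)
	}
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}

	// Keep probing until the apiserver reports ok, as the wait loop above does.
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
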
	I1227 20:51:21.702378  484456 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:51:21.706184  484456 system_pods.go:59] 8 kube-system pods found
	I1227 20:51:21.706221  484456 system_pods.go:61] "coredns-5dd5756b68-gpcrh" [a817d3e5-41a0-4029-8f3a-e902cf24169c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:51:21.706231  484456 system_pods.go:61] "etcd-old-k8s-version-855707" [347619f1-fd36-46b8-8280-533c8d8107e6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:51:21.706239  484456 system_pods.go:61] "kindnet-v9n7l" [20b6398f-382b-4304-beda-4f34e8e3a495] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:51:21.706247  484456 system_pods.go:61] "kube-apiserver-old-k8s-version-855707" [8bfd9b90-fbb5-4473-b9a4-572d9ccaa1c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:51:21.706269  484456 system_pods.go:61] "kube-controller-manager-old-k8s-version-855707" [2eff16b1-95f6-4af5-9aea-16016fbf3b59] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:51:21.706278  484456 system_pods.go:61] "kube-proxy-57s5h" [fac7868c-241d-4875-9eb7-976a578866b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 20:51:21.706289  484456 system_pods.go:61] "kube-scheduler-old-k8s-version-855707" [cd0966b8-145e-4e34-b566-9b91f269eaa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:51:21.706297  484456 system_pods.go:61] "storage-provisioner" [c3f69f3c-dfab-44a6-a2b9-a993044ed4ec] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:51:21.706307  484456 system_pods.go:74] duration metric: took 3.922938ms to wait for pod list to return data ...
	I1227 20:51:21.706315  484456 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:51:21.708844  484456 default_sa.go:45] found service account: "default"
	I1227 20:51:21.708866  484456 default_sa.go:55] duration metric: took 2.541948ms for default service account to be created ...
	I1227 20:51:21.708876  484456 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:51:21.712449  484456 system_pods.go:86] 8 kube-system pods found
	I1227 20:51:21.712521  484456 system_pods.go:89] "coredns-5dd5756b68-gpcrh" [a817d3e5-41a0-4029-8f3a-e902cf24169c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:51:21.712540  484456 system_pods.go:89] "etcd-old-k8s-version-855707" [347619f1-fd36-46b8-8280-533c8d8107e6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:51:21.712550  484456 system_pods.go:89] "kindnet-v9n7l" [20b6398f-382b-4304-beda-4f34e8e3a495] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:51:21.712557  484456 system_pods.go:89] "kube-apiserver-old-k8s-version-855707" [8bfd9b90-fbb5-4473-b9a4-572d9ccaa1c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:51:21.712565  484456 system_pods.go:89] "kube-controller-manager-old-k8s-version-855707" [2eff16b1-95f6-4af5-9aea-16016fbf3b59] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:51:21.712581  484456 system_pods.go:89] "kube-proxy-57s5h" [fac7868c-241d-4875-9eb7-976a578866b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 20:51:21.712594  484456 system_pods.go:89] "kube-scheduler-old-k8s-version-855707" [cd0966b8-145e-4e34-b566-9b91f269eaa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:51:21.712601  484456 system_pods.go:89] "storage-provisioner" [c3f69f3c-dfab-44a6-a2b9-a993044ed4ec] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:51:21.712608  484456 system_pods.go:126] duration metric: took 3.726743ms to wait for k8s-apps to be running ...
	I1227 20:51:21.712621  484456 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:51:21.712680  484456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:51:21.726818  484456 system_svc.go:56] duration metric: took 14.179409ms WaitForService to wait for kubelet
	I1227 20:51:21.726847  484456 kubeadm.go:587] duration metric: took 5.853517429s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:51:21.726868  484456 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:51:21.729738  484456 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:51:21.729770  484456 node_conditions.go:123] node cpu capacity is 2
	I1227 20:51:21.729791  484456 node_conditions.go:105] duration metric: took 2.917707ms to run NodePressure ...
	I1227 20:51:21.729804  484456 start.go:242] waiting for startup goroutines ...
	I1227 20:51:21.729820  484456 start.go:247] waiting for cluster config update ...
	I1227 20:51:21.729839  484456 start.go:256] writing updated cluster config ...
	I1227 20:51:21.730174  484456 ssh_runner.go:195] Run: rm -f paused
	I1227 20:51:21.733645  484456 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:51:21.738005  484456 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-gpcrh" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 20:51:23.748804  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:26.243673  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:28.743819  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:30.744141  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:32.753955  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:35.244452  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:37.245909  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:39.743692  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:41.746642  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:44.243450  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:46.244671  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:48.245308  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:50.755633  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:53.268770  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:55.754680  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	W1227 20:51:58.243430  484456 pod_ready.go:104] pod "coredns-5dd5756b68-gpcrh" is not "Ready", error: <nil>
	I1227 20:51:59.244670  484456 pod_ready.go:94] pod "coredns-5dd5756b68-gpcrh" is "Ready"
	I1227 20:51:59.244701  484456 pod_ready.go:86] duration metric: took 37.506670083s for pod "coredns-5dd5756b68-gpcrh" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:51:59.248675  484456 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-855707" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:51:59.253982  484456 pod_ready.go:94] pod "etcd-old-k8s-version-855707" is "Ready"
	I1227 20:51:59.254008  484456 pod_ready.go:86] duration metric: took 5.308983ms for pod "etcd-old-k8s-version-855707" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:51:59.257023  484456 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-855707" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:51:59.261758  484456 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-855707" is "Ready"
	I1227 20:51:59.261783  484456 pod_ready.go:86] duration metric: took 4.735691ms for pod "kube-apiserver-old-k8s-version-855707" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:51:59.264672  484456 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-855707" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:51:59.447152  484456 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-855707" is "Ready"
	I1227 20:51:59.447182  484456 pod_ready.go:86] duration metric: took 182.483797ms for pod "kube-controller-manager-old-k8s-version-855707" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:51:59.643269  484456 pod_ready.go:83] waiting for pod "kube-proxy-57s5h" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:52:00.042958  484456 pod_ready.go:94] pod "kube-proxy-57s5h" is "Ready"
	I1227 20:52:00.042985  484456 pod_ready.go:86] duration metric: took 399.68686ms for pod "kube-proxy-57s5h" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:52:00.253818  484456 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-855707" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:52:00.642359  484456 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-855707" is "Ready"
	I1227 20:52:00.642390  484456 pod_ready.go:86] duration metric: took 388.542447ms for pod "kube-scheduler-old-k8s-version-855707" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:52:00.642404  484456 pod_ready.go:40] duration metric: took 38.90872564s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:52:00.693354  484456 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1227 20:52:00.696443  484456 out.go:203] 
	W1227 20:52:00.699282  484456 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1227 20:52:00.702179  484456 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1227 20:52:00.705033  484456 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-855707" cluster and "default" namespace by default
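
For reference, the api_server.go lines above are polling the apiserver's /healthz endpoint until it answers 200 with the body "ok". The Go sketch below reproduces that kind of probe under stated assumptions: the URL matches the endpoint in the log, but the timeout, retry interval, and the decision to skip TLS verification are illustrative choices, not minikube's actual implementation.

// healthz_probe.go: a minimal, hedged sketch of polling an apiserver /healthz
// endpoint until it reports "ok", as the api_server.go log lines describe.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The test apiserver uses a self-signed certificate; skipping
		// verification keeps this sketch self-contained. Real callers
		// should load the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// Matches the "https://192.168.76.2:8443/healthz returned 200: ok" lines above.
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("healthz ok")
}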
	
	
	==> CRI-O <==
	Dec 27 20:51:56 old-k8s-version-855707 crio[653]: time="2025-12-27T20:51:56.0274854Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:51:56 old-k8s-version-855707 crio[653]: time="2025-12-27T20:51:56.03488256Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:51:56 old-k8s-version-855707 crio[653]: time="2025-12-27T20:51:56.035492126Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:51:56 old-k8s-version-855707 crio[653]: time="2025-12-27T20:51:56.05096892Z" level=info msg="Created container 9f13af8e3cf48801f0478b82b99ee66bcfc73622a69c8aac7a5a9bc87d6b8dad: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fgk6l/dashboard-metrics-scraper" id=f37e8f77-57d7-4bc8-adfb-f0d4beb3a94a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:51:56 old-k8s-version-855707 crio[653]: time="2025-12-27T20:51:56.052093122Z" level=info msg="Starting container: 9f13af8e3cf48801f0478b82b99ee66bcfc73622a69c8aac7a5a9bc87d6b8dad" id=7ea75a9f-10ae-49aa-9c9f-c42fd38d9ebe name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:51:56 old-k8s-version-855707 crio[653]: time="2025-12-27T20:51:56.054729796Z" level=info msg="Started container" PID=1646 containerID=9f13af8e3cf48801f0478b82b99ee66bcfc73622a69c8aac7a5a9bc87d6b8dad description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fgk6l/dashboard-metrics-scraper id=7ea75a9f-10ae-49aa-9c9f-c42fd38d9ebe name=/runtime.v1.RuntimeService/StartContainer sandboxID=f3fab93b2872fb46b81689be2167c5e13b65bd16dafe4483e28d388d9382d93c
	Dec 27 20:51:56 old-k8s-version-855707 conmon[1644]: conmon 9f13af8e3cf48801f047 <ninfo>: container 1646 exited with status 1
	Dec 27 20:51:56 old-k8s-version-855707 crio[653]: time="2025-12-27T20:51:56.239685149Z" level=info msg="Removing container: f34d240c73df632f12ba0708f523cf3e099946def9fe310317b9fe10cf92238c" id=9d64f2fe-6b22-4920-82c9-c794da26f941 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:51:56 old-k8s-version-855707 crio[653]: time="2025-12-27T20:51:56.254893816Z" level=info msg="Error loading conmon cgroup of container f34d240c73df632f12ba0708f523cf3e099946def9fe310317b9fe10cf92238c: cgroup deleted" id=9d64f2fe-6b22-4920-82c9-c794da26f941 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:51:56 old-k8s-version-855707 crio[653]: time="2025-12-27T20:51:56.260840819Z" level=info msg="Removed container f34d240c73df632f12ba0708f523cf3e099946def9fe310317b9fe10cf92238c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fgk6l/dashboard-metrics-scraper" id=9d64f2fe-6b22-4920-82c9-c794da26f941 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.444184883Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.44817511Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.448208718Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.448235531Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.451525093Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.451566782Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.451586925Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.456163958Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.456345901Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.456444787Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.459686867Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.459845451Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.459918253Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.463188313Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:52:02 old-k8s-version-855707 crio[653]: time="2025-12-27T20:52:02.463335238Z" level=info msg="Updated default CNI network name to kindnet"
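
The CNI monitoring events above are CRI-O reacting to CREATE, WRITE, and RENAME notifications on /etc/cni/net.d while kindnet rewrites 10-kindnet.conflist. The sketch below is not CRI-O's code, only a minimal fsnotify watcher showing the same pattern; the directory path comes from the log, everything else is an assumption.

// cni_watch.go: a hedged sketch of watching a CNI config directory for the
// CREATE/WRITE/RENAME events seen in the CRI-O log; not CRI-O's actual code.
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// The directory comes from the log above; the runtime watches it so it
	// can pick up changes such as 10-kindnet.conflist being rewritten.
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case ev, ok := <-w.Events:
			if !ok {
				return
			}
			// CREATE, WRITE and RENAME are the operations the CRI-O monitor
			// logs; a real runtime would re-read the conflist files here.
			if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 {
				log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
			}
		case err, ok := <-w.Errors:
			if !ok {
				return
			}
			log.Println("watch error:", err)
		}
	}
}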
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	9f13af8e3cf48       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago       Exited              dashboard-metrics-scraper   2                   f3fab93b2872f       dashboard-metrics-scraper-5f989dc9cf-fgk6l       kubernetes-dashboard
	c896e123ec622       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   b6b61e424fca3       storage-provisioner                              kube-system
	e9fac01bcc780       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   36 seconds ago       Running             kubernetes-dashboard        0                   a9ccbd6764cd9       kubernetes-dashboard-8694d4445c-77hv8            kubernetes-dashboard
	472e9ef0638e8       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           55 seconds ago       Running             coredns                     1                   51e6ed3efc2b3       coredns-5dd5756b68-gpcrh                         kube-system
	990f851e3eb32       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   b6b61e424fca3       storage-provisioner                              kube-system
	72119f349b521       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   d4b1606caccc4       busybox                                          default
	b609c90a03360       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           55 seconds ago       Running             kindnet-cni                 1                   c9e86cefd5e15       kindnet-v9n7l                                    kube-system
	045ae702e454a       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           55 seconds ago       Running             kube-proxy                  1                   3d14930489ec2       kube-proxy-57s5h                                 kube-system
	7d4c3f3f4c978       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   900209e44d0f0       kube-scheduler-old-k8s-version-855707            kube-system
	dbb47e0c12746       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   f76f7c54abedf       kube-controller-manager-old-k8s-version-855707   kube-system
	1aee215b0f372       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   c2b87e582a2d1       kube-apiserver-old-k8s-version-855707            kube-system
	f81dcf2a16152       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   12468d7a4568b       etcd-old-k8s-version-855707                      kube-system
	
	
	==> coredns [472e9ef0638e8f491ece4877ddb32d0f7b578b7729e6009843e0f87891b75f03] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36192 - 36987 "HINFO IN 99020971225076295.2904537491910346420. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.010850843s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-855707
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-855707
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=old-k8s-version-855707
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_50_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:50:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-855707
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:52:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:51:50 +0000   Sat, 27 Dec 2025 20:50:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:51:50 +0000   Sat, 27 Dec 2025 20:50:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:51:50 +0000   Sat, 27 Dec 2025 20:50:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:51:50 +0000   Sat, 27 Dec 2025 20:50:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-855707
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                6f46b687-0cf7-4b64-a058-88b55b2f77d5
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-5dd5756b68-gpcrh                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     109s
	  kube-system                 etcd-old-k8s-version-855707                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m2s
	  kube-system                 kindnet-v9n7l                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-old-k8s-version-855707             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-old-k8s-version-855707    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-57s5h                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-old-k8s-version-855707             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-fgk6l        0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-77hv8             0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 108s                   kube-proxy       
	  Normal  Starting                 55s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-855707 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-855707 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-855707 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m2s                   kubelet          Node old-k8s-version-855707 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m2s                   kubelet          Node old-k8s-version-855707 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s                   kubelet          Node old-k8s-version-855707 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m2s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s                   node-controller  Node old-k8s-version-855707 event: Registered Node old-k8s-version-855707 in Controller
	  Normal  NodeReady                96s                    kubelet          Node old-k8s-version-855707 status is now: NodeReady
	  Normal  Starting                 63s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node old-k8s-version-855707 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node old-k8s-version-855707 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)      kubelet          Node old-k8s-version-855707 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                    node-controller  Node old-k8s-version-855707 event: Registered Node old-k8s-version-855707 in Controller
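
The conditions and capacity in this node description are the same fields minikube's node_conditions.go inspects when it logs the 203034800Ki ephemeral storage and 2 CPUs seen earlier. Below is a hedged client-go sketch that reads them for this node; the kubeconfig path is illustrative and this is not minikube's own code.

// node_info.go: a hedged sketch using client-go to read the conditions and
// capacity shown in the "describe nodes" output; the kubeconfig path is assumed.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative path; any kubeconfig pointing at the cluster works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "old-k8s-version-855707", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// MemoryPressure / DiskPressure / PIDPressure / Ready, as in the table above.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %s  %s\n", c.Type, c.Status, c.Reason)
	}
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	fmt.Printf("cpu capacity: %s, ephemeral-storage: %s\n", cpu.String(), storage.String())
}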
	
	
	==> dmesg <==
	[Dec27 20:17] overlayfs: idmapped layers are currently not supported
	[Dec27 20:19] overlayfs: idmapped layers are currently not supported
	[ +36.244108] systemd-journald[225]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 20:22] overlayfs: idmapped layers are currently not supported
	[Dec27 20:23] overlayfs: idmapped layers are currently not supported
	[Dec27 20:24] overlayfs: idmapped layers are currently not supported
	[Dec27 20:25] overlayfs: idmapped layers are currently not supported
	[ +35.447549] overlayfs: idmapped layers are currently not supported
	[Dec27 20:26] overlayfs: idmapped layers are currently not supported
	[Dec27 20:27] overlayfs: idmapped layers are currently not supported
	[  +6.770645] overlayfs: idmapped layers are currently not supported
	[Dec27 20:28] overlayfs: idmapped layers are currently not supported
	[ +25.872751] overlayfs: idmapped layers are currently not supported
	[Dec27 20:29] overlayfs: idmapped layers are currently not supported
	[ +32.997137] overlayfs: idmapped layers are currently not supported
	[Dec27 20:31] overlayfs: idmapped layers are currently not supported
	[Dec27 20:33] overlayfs: idmapped layers are currently not supported
	[ +33.772475] overlayfs: idmapped layers are currently not supported
	[Dec27 20:39] overlayfs: idmapped layers are currently not supported
	[Dec27 20:40] overlayfs: idmapped layers are currently not supported
	[Dec27 20:44] overlayfs: idmapped layers are currently not supported
	[Dec27 20:45] overlayfs: idmapped layers are currently not supported
	[Dec27 20:49] overlayfs: idmapped layers are currently not supported
	[Dec27 20:50] overlayfs: idmapped layers are currently not supported
	[Dec27 20:51] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f81dcf2a161529d4fcaa1a2ecd3b730d50a5f75c24fdaf286d568185a6ad7aad] <==
	{"level":"info","ts":"2025-12-27T20:51:15.808368Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T20:51:15.808377Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T20:51:15.808567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-27T20:51:15.808618Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-27T20:51:15.808696Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T20:51:15.808721Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T20:51:15.81616Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-27T20:51:15.816925Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T20:51:15.817062Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T20:51:15.816583Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T20:51:15.817166Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T20:51:17.393492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T20:51:17.393619Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:51:17.393672Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T20:51:17.393714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T20:51:17.393745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T20:51:17.393785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T20:51:17.393819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T20:51:17.395341Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-855707 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:51:17.395422Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:51:17.396562Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T20:51:17.395469Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:51:17.402399Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:51:17.425405Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:51:17.425536Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:52:17 up  2:34,  0 user,  load average: 1.45, 1.66, 1.84
	Linux old-k8s-version-855707 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b609c90a033605921a0c78fea186518f1183108bfde56e07abdd5ad07826877b] <==
	I1227 20:51:22.247254       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:51:22.250449       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 20:51:22.250695       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:51:22.251356       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:51:22.251427       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:51:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:51:22.442076       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:51:22.442780       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:51:22.442837       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:51:22.442961       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 20:51:52.442466       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 20:51:52.442506       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1227 20:51:52.442619       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1227 20:51:52.443819       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1227 20:51:54.043310       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:51:54.043347       1 metrics.go:72] Registering metrics
	I1227 20:51:54.043397       1 controller.go:711] "Syncing nftables rules"
	I1227 20:52:02.443811       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:52:02.443849       1 main.go:301] handling current node
	I1227 20:52:12.449581       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:52:12.449619       1 main.go:301] handling current node
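
kindnet's "Waiting for informer caches to sync" and "Caches are synced" messages, with the reflector list failures against 10.96.0.1:443 in between, are the standard client-go shared-informer startup pattern retrying until the apiserver service becomes reachable. The sketch below shows that pattern under the assumption of an in-cluster config; it is not kindnet's implementation.

// informer_sync.go: a hedged sketch of the shared-informer "wait for caches
// to sync" pattern seen in the kindnet log; not kindnet's own code.
package main

import (
	"log"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// Assumes this runs inside the cluster, as kindnet does; while the
	// service IP is unreachable, the reflectors retry and log errors like
	// the "Failed to watch ... i/o timeout" lines above.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	factory := informers.NewSharedInformerFactory(cs, 10*time.Minute)
	nodeInformer := factory.Core().V1().Nodes().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	log.Println("Waiting for informer caches to sync")
	if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
		log.Fatal("caches never synced")
	}
	log.Println("Caches are synced")
}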
	
	
	==> kube-apiserver [1aee215b0f3720a59c0901866e7a32993b55fc4f9c1a946cd923bcfd33eef1dd] <==
	I1227 20:51:20.138535       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:51:20.143159       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1227 20:51:20.148623       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1227 20:51:20.148724       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1227 20:51:20.150254       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1227 20:51:20.151084       1 shared_informer.go:318] Caches are synced for configmaps
	I1227 20:51:20.151690       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1227 20:51:20.151814       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 20:51:20.172232       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1227 20:51:20.172395       1 aggregator.go:166] initial CRD sync complete...
	I1227 20:51:20.172430       1 autoregister_controller.go:141] Starting autoregister controller
	I1227 20:51:20.172460       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:51:20.172492       1 cache.go:39] Caches are synced for autoregister controller
	E1227 20:51:20.215581       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 20:51:20.756630       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1227 20:51:21.512117       1 controller.go:624] quota admission added evaluator for: namespaces
	I1227 20:51:21.555066       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1227 20:51:21.584303       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:51:21.595554       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:51:21.608139       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1227 20:51:21.661292       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.29.180"}
	I1227 20:51:21.681998       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.77.80"}
	I1227 20:51:32.844061       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:51:32.852252       1 controller.go:624] quota admission added evaluator for: endpoints
	I1227 20:51:32.897211       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [dbb47e0c12746be14a55cae562bfb8e8d54317017f8f82d613e967bb89746d7e] <==
	I1227 20:51:32.930734       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-fgk6l"
	I1227 20:51:32.953957       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.701575ms"
	I1227 20:51:32.954305       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="44.267961ms"
	I1227 20:51:32.957684       1 shared_informer.go:318] Caches are synced for resource quota
	I1227 20:51:32.957782       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1227 20:51:32.971883       1 shared_informer.go:318] Caches are synced for daemon sets
	I1227 20:51:32.984467       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="30.417082ms"
	I1227 20:51:32.989890       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="35.543219ms"
	I1227 20:51:32.993765       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="55.252µs"
	I1227 20:51:33.002658       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="18.083732ms"
	I1227 20:51:33.002768       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.145µs"
	I1227 20:51:33.015346       1 shared_informer.go:318] Caches are synced for crt configmap
	I1227 20:51:33.050972       1 shared_informer.go:318] Caches are synced for resource quota
	I1227 20:51:33.373105       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 20:51:33.373158       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1227 20:51:33.407355       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 20:51:37.192056       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="41.55µs"
	I1227 20:51:38.201958       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.724µs"
	I1227 20:51:39.204816       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.291µs"
	I1227 20:51:42.241824       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="21.250713ms"
	I1227 20:51:42.242041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="79.563µs"
	I1227 20:51:56.258308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="65µs"
	I1227 20:51:58.849899       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.570881ms"
	I1227 20:51:58.850059       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.429µs"
	I1227 20:52:03.267391       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="58.821µs"
	
	
	==> kube-proxy [045ae702e454a1e50b74425e5caee0ffdfc04f73a5b99e25e3bcab98cb86fbb5] <==
	I1227 20:51:22.276907       1 server_others.go:69] "Using iptables proxy"
	I1227 20:51:22.292007       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1227 20:51:22.316299       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:51:22.318456       1 server_others.go:152] "Using iptables Proxier"
	I1227 20:51:22.318505       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1227 20:51:22.318514       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1227 20:51:22.318539       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1227 20:51:22.318770       1 server.go:846] "Version info" version="v1.28.0"
	I1227 20:51:22.318787       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:51:22.320003       1 config.go:188] "Starting service config controller"
	I1227 20:51:22.321036       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1227 20:51:22.321085       1 config.go:97] "Starting endpoint slice config controller"
	I1227 20:51:22.321101       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1227 20:51:22.323371       1 config.go:315] "Starting node config controller"
	I1227 20:51:22.327202       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1227 20:51:22.327942       1 shared_informer.go:318] Caches are synced for node config
	I1227 20:51:22.421379       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1227 20:51:22.421386       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [7d4c3f3f4c978744c9c5787250f663c59c07f863a1314cc8b1c62aeb93bd69f7] <==
	W1227 20:51:20.062247       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1227 20:51:20.062263       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1227 20:51:20.062318       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1227 20:51:20.062332       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1227 20:51:20.062380       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1227 20:51:20.062395       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1227 20:51:20.062449       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1227 20:51:20.062463       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1227 20:51:20.062514       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1227 20:51:20.062529       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1227 20:51:20.062569       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1227 20:51:20.062585       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1227 20:51:20.062622       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1227 20:51:20.062637       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1227 20:51:20.062673       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1227 20:51:20.062690       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1227 20:51:20.062740       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1227 20:51:20.062756       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1227 20:51:20.062798       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1227 20:51:20.062813       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1227 20:51:20.062850       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1227 20:51:20.062866       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1227 20:51:20.090892       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1227 20:51:20.090933       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1227 20:51:21.705795       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 27 20:51:33 old-k8s-version-855707 kubelet[786]: I1227 20:51:33.073008     786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnmwc\" (UniqueName: \"kubernetes.io/projected/decbcdbb-b3dc-4d09-acc2-6cd6d5cda634-kube-api-access-wnmwc\") pod \"dashboard-metrics-scraper-5f989dc9cf-fgk6l\" (UID: \"decbcdbb-b3dc-4d09-acc2-6cd6d5cda634\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fgk6l"
	Dec 27 20:51:33 old-k8s-version-855707 kubelet[786]: I1227 20:51:33.073044     786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a193555c-c195-4ba4-8eb9-c4c8e4a915df-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-77hv8\" (UID: \"a193555c-c195-4ba4-8eb9-c4c8e4a915df\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-77hv8"
	Dec 27 20:51:33 old-k8s-version-855707 kubelet[786]: I1227 20:51:33.073074     786 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/decbcdbb-b3dc-4d09-acc2-6cd6d5cda634-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-fgk6l\" (UID: \"decbcdbb-b3dc-4d09-acc2-6cd6d5cda634\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fgk6l"
	Dec 27 20:51:33 old-k8s-version-855707 kubelet[786]: W1227 20:51:33.279580     786 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2/crio-f3fab93b2872fb46b81689be2167c5e13b65bd16dafe4483e28d388d9382d93c WatchSource:0}: Error finding container f3fab93b2872fb46b81689be2167c5e13b65bd16dafe4483e28d388d9382d93c: Status 404 returned error can't find the container with id f3fab93b2872fb46b81689be2167c5e13b65bd16dafe4483e28d388d9382d93c
	Dec 27 20:51:33 old-k8s-version-855707 kubelet[786]: W1227 20:51:33.292655     786 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ffdc66f60c1f6251fc1f908c71d4758f385e5697cb39837a5ad2fa6d525c21f2/crio-a9ccbd6764cd988eb181081d292ea75db3184866fd11c40c5badab32e483af9c WatchSource:0}: Error finding container a9ccbd6764cd988eb181081d292ea75db3184866fd11c40c5badab32e483af9c: Status 404 returned error can't find the container with id a9ccbd6764cd988eb181081d292ea75db3184866fd11c40c5badab32e483af9c
	Dec 27 20:51:37 old-k8s-version-855707 kubelet[786]: I1227 20:51:37.178279     786 scope.go:117] "RemoveContainer" containerID="486e38fd6e5049de2dda88072d6c48970e66be06078604221d640ef4c5e70476"
	Dec 27 20:51:38 old-k8s-version-855707 kubelet[786]: I1227 20:51:38.184659     786 scope.go:117] "RemoveContainer" containerID="486e38fd6e5049de2dda88072d6c48970e66be06078604221d640ef4c5e70476"
	Dec 27 20:51:38 old-k8s-version-855707 kubelet[786]: I1227 20:51:38.184969     786 scope.go:117] "RemoveContainer" containerID="f34d240c73df632f12ba0708f523cf3e099946def9fe310317b9fe10cf92238c"
	Dec 27 20:51:38 old-k8s-version-855707 kubelet[786]: E1227 20:51:38.185227     786 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fgk6l_kubernetes-dashboard(decbcdbb-b3dc-4d09-acc2-6cd6d5cda634)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fgk6l" podUID="decbcdbb-b3dc-4d09-acc2-6cd6d5cda634"
	Dec 27 20:51:39 old-k8s-version-855707 kubelet[786]: I1227 20:51:39.189388     786 scope.go:117] "RemoveContainer" containerID="f34d240c73df632f12ba0708f523cf3e099946def9fe310317b9fe10cf92238c"
	Dec 27 20:51:39 old-k8s-version-855707 kubelet[786]: E1227 20:51:39.190176     786 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fgk6l_kubernetes-dashboard(decbcdbb-b3dc-4d09-acc2-6cd6d5cda634)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fgk6l" podUID="decbcdbb-b3dc-4d09-acc2-6cd6d5cda634"
	Dec 27 20:51:43 old-k8s-version-855707 kubelet[786]: I1227 20:51:43.250947     786 scope.go:117] "RemoveContainer" containerID="f34d240c73df632f12ba0708f523cf3e099946def9fe310317b9fe10cf92238c"
	Dec 27 20:51:43 old-k8s-version-855707 kubelet[786]: E1227 20:51:43.251286     786 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fgk6l_kubernetes-dashboard(decbcdbb-b3dc-4d09-acc2-6cd6d5cda634)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fgk6l" podUID="decbcdbb-b3dc-4d09-acc2-6cd6d5cda634"
	Dec 27 20:51:53 old-k8s-version-855707 kubelet[786]: I1227 20:51:53.226744     786 scope.go:117] "RemoveContainer" containerID="990f851e3eb3233bbb21418beecefa82d2748a136f67502e18f0e49d805ab852"
	Dec 27 20:51:53 old-k8s-version-855707 kubelet[786]: I1227 20:51:53.270007     786 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-77hv8" podStartSLOduration=13.288276278 podCreationTimestamp="2025-12-27 20:51:32 +0000 UTC" firstStartedPulling="2025-12-27 20:51:33.299155605 +0000 UTC m=+18.411611516" lastFinishedPulling="2025-12-27 20:51:41.280796317 +0000 UTC m=+26.393252236" observedRunningTime="2025-12-27 20:51:42.221141032 +0000 UTC m=+27.333596951" watchObservedRunningTime="2025-12-27 20:51:53.269916998 +0000 UTC m=+38.382372909"
	Dec 27 20:51:56 old-k8s-version-855707 kubelet[786]: I1227 20:51:56.023874     786 scope.go:117] "RemoveContainer" containerID="f34d240c73df632f12ba0708f523cf3e099946def9fe310317b9fe10cf92238c"
	Dec 27 20:51:56 old-k8s-version-855707 kubelet[786]: I1227 20:51:56.237780     786 scope.go:117] "RemoveContainer" containerID="f34d240c73df632f12ba0708f523cf3e099946def9fe310317b9fe10cf92238c"
	Dec 27 20:51:56 old-k8s-version-855707 kubelet[786]: I1227 20:51:56.238003     786 scope.go:117] "RemoveContainer" containerID="9f13af8e3cf48801f0478b82b99ee66bcfc73622a69c8aac7a5a9bc87d6b8dad"
	Dec 27 20:51:56 old-k8s-version-855707 kubelet[786]: E1227 20:51:56.238280     786 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fgk6l_kubernetes-dashboard(decbcdbb-b3dc-4d09-acc2-6cd6d5cda634)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fgk6l" podUID="decbcdbb-b3dc-4d09-acc2-6cd6d5cda634"
	Dec 27 20:52:03 old-k8s-version-855707 kubelet[786]: I1227 20:52:03.251356     786 scope.go:117] "RemoveContainer" containerID="9f13af8e3cf48801f0478b82b99ee66bcfc73622a69c8aac7a5a9bc87d6b8dad"
	Dec 27 20:52:03 old-k8s-version-855707 kubelet[786]: E1227 20:52:03.252227     786 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fgk6l_kubernetes-dashboard(decbcdbb-b3dc-4d09-acc2-6cd6d5cda634)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fgk6l" podUID="decbcdbb-b3dc-4d09-acc2-6cd6d5cda634"
	Dec 27 20:52:12 old-k8s-version-855707 kubelet[786]: I1227 20:52:12.944697     786 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 27 20:52:12 old-k8s-version-855707 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 20:52:12 old-k8s-version-855707 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 20:52:12 old-k8s-version-855707 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [e9fac01bcc780f0a4cbad6be58cc6198eff0b31d06a677ed0563d738670e0d3f] <==
	2025/12/27 20:51:41 Using namespace: kubernetes-dashboard
	2025/12/27 20:51:41 Using in-cluster config to connect to apiserver
	2025/12/27 20:51:41 Using secret token for csrf signing
	2025/12/27 20:51:41 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 20:51:41 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 20:51:41 Successful initial request to the apiserver, version: v1.28.0
	2025/12/27 20:51:41 Generating JWE encryption key
	2025/12/27 20:51:41 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 20:51:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 20:51:42 Initializing JWE encryption key from synchronized object
	2025/12/27 20:51:42 Creating in-cluster Sidecar client
	2025/12/27 20:51:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:51:42 Serving insecurely on HTTP port: 9090
	2025/12/27 20:52:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:51:41 Starting overwatch
	
	
	==> storage-provisioner [990f851e3eb3233bbb21418beecefa82d2748a136f67502e18f0e49d805ab852] <==
	I1227 20:51:22.256338       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 20:51:52.258614       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c896e123ec6228b1d4e8e675ab96d63527b576dc221b62eaba55cf1b039bbaab] <==
	I1227 20:51:53.279189       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 20:51:53.296103       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 20:51:53.296207       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1227 20:52:10.693187       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 20:52:10.693514       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-855707_b04fae04-420c-43de-bf55-797a5381f59e!
	I1227 20:52:10.694171       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c3590326-14f9-4148-8efd-a85b09a3c11f", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-855707_b04fae04-420c-43de-bf55-797a5381f59e became leader
	I1227 20:52:10.794293       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-855707_b04fae04-420c-43de-bf55-797a5381f59e!
	

                                                
                                                
-- /stdout --
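The storage-provisioner logs above show the first instance exiting after an i/o timeout against the in-cluster apiserver address https://10.96.0.1:443/version, with the restarted instance later acquiring the kube-system/k8s.io-minikube-hostpath lease. A purely illustrative way to probe that same endpoint from the node, assuming the profile were still up and that curl is available in the kicbase image (both assumptions; this is not part of the harness):

	out/minikube-linux-arm64 ssh -p old-k8s-version-855707 -- curl -sk https://10.96.0.1:443/version
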
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-855707 -n old-k8s-version-855707
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-855707 -n old-k8s-version-855707: exit status 2 (341.694776ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-855707 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.55s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-058924 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-058924 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (239.935093ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:53:16Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-058924 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
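The exit status 11 above is minikube's pre-flight paused check failing, not the addon itself: per the stderr, enabling an addon first lists paused containers with "sudo runc list -f json", and that command fails on this node because /run/runc does not exist. A minimal by-hand reproduction of the same check over SSH (the runc invocation is exactly the one quoted in the error; the ls is just a follow-up look at the missing path):

	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-058924 -- sudo runc list -f json
	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-058924 -- ls -la /run/runc
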
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-058924 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-058924 describe deploy/metrics-server -n kube-system: exit status 1 (84.568256ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-058924 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
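What the assertion at start_stop_delete_test.go:219 is looking for is the overridden image reference fake.domain/registry.k8s.io/echoserver:1.4 on the metrics-server deployment, which was never created because the enable step already failed on the paused check. A rough manual equivalent of that verification (illustrative only, not how the harness checks it):

	kubectl --context default-k8s-diff-port-058924 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
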
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-058924
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-058924:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "14a8831f1ae279bdee9cb950b754d19cb55a9a96bb1c6a18f3fb90e8bfce9436",
	        "Created": "2025-12-27T20:52:26.32228828Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 488975,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:52:26.383176471Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/14a8831f1ae279bdee9cb950b754d19cb55a9a96bb1c6a18f3fb90e8bfce9436/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/14a8831f1ae279bdee9cb950b754d19cb55a9a96bb1c6a18f3fb90e8bfce9436/hostname",
	        "HostsPath": "/var/lib/docker/containers/14a8831f1ae279bdee9cb950b754d19cb55a9a96bb1c6a18f3fb90e8bfce9436/hosts",
	        "LogPath": "/var/lib/docker/containers/14a8831f1ae279bdee9cb950b754d19cb55a9a96bb1c6a18f3fb90e8bfce9436/14a8831f1ae279bdee9cb950b754d19cb55a9a96bb1c6a18f3fb90e8bfce9436-json.log",
	        "Name": "/default-k8s-diff-port-058924",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-058924:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-058924",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "14a8831f1ae279bdee9cb950b754d19cb55a9a96bb1c6a18f3fb90e8bfce9436",
	                "LowerDir": "/var/lib/docker/overlay2/1705b32d6f7b3acd21037f84bc864fcd3368266ae22d9d1ff6c6114e626d27cd-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1705b32d6f7b3acd21037f84bc864fcd3368266ae22d9d1ff6c6114e626d27cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1705b32d6f7b3acd21037f84bc864fcd3368266ae22d9d1ff6c6114e626d27cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1705b32d6f7b3acd21037f84bc864fcd3368266ae22d9d1ff6c6114e626d27cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-058924",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-058924/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-058924",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-058924",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-058924",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "afcab9387a70a2a569d539458bc07bd0e8b9dd0849cfc5d0331ce4a7c3fa32ea",
	            "SandboxKey": "/var/run/docker/netns/afcab9387a70",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-058924": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:ed:ab:91:17:fd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4cf559b41345f8593676aae308d8407a6052ba110f51cbc56967a3187eac038b",
	                    "EndpointID": "14984acd2d319789594720dd20e25bc54ee07a0fbd80a97c53f8d84efec2e771",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-058924",
	                        "14a8831f1ae2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
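From the inspect output, the cluster's API server port 8444/tcp is published on 127.0.0.1:33421. A small sketch for reading just that mapping back out with a Go template instead of scanning the full JSON (standard docker inspect formatting; container name as above):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-058924
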
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-058924 -n default-k8s-diff-port-058924
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-058924 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-058924 logs -n 25: (1.170165137s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cilium-037975 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-037975                │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-037975                │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-037975                │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-037975 sudo crio config                                                                                                                                                                                                             │ cilium-037975                │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │                     │
	│ delete  │ -p cilium-037975                                                                                                                                                                                                                              │ cilium-037975                │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │ 27 Dec 25 20:45 UTC │
	│ start   │ -p cert-expiration-629954 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-629954       │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │ 27 Dec 25 20:45 UTC │
	│ start   │ -p cert-expiration-629954 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-629954       │ jenkins │ v1.37.0 │ 27 Dec 25 20:48 UTC │ 27 Dec 25 20:48 UTC │
	│ delete  │ -p cert-expiration-629954                                                                                                                                                                                                                     │ cert-expiration-629954       │ jenkins │ v1.37.0 │ 27 Dec 25 20:48 UTC │ 27 Dec 25 20:48 UTC │
	│ start   │ -p force-systemd-flag-604544 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-604544    │ jenkins │ v1.37.0 │ 27 Dec 25 20:48 UTC │                     │
	│ delete  │ -p force-systemd-env-859716                                                                                                                                                                                                                   │ force-systemd-env-859716     │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ start   │ -p cert-options-765175 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-765175          │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ ssh     │ cert-options-765175 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-765175          │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ ssh     │ -p cert-options-765175 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-765175          │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ delete  │ -p cert-options-765175                                                                                                                                                                                                                        │ cert-options-765175          │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ start   │ -p old-k8s-version-855707 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-855707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:50 UTC │                     │
	│ stop    │ -p old-k8s-version-855707 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:50 UTC │ 27 Dec 25 20:51 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-855707 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:51 UTC │ 27 Dec 25 20:51 UTC │
	│ start   │ -p old-k8s-version-855707 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:51 UTC │ 27 Dec 25 20:52 UTC │
	│ image   │ old-k8s-version-855707 image list --format=json                                                                                                                                                                                               │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ pause   │ -p old-k8s-version-855707 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │                     │
	│ delete  │ -p old-k8s-version-855707                                                                                                                                                                                                                     │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ delete  │ -p old-k8s-version-855707                                                                                                                                                                                                                     │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ start   │ -p default-k8s-diff-port-058924 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:53 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-058924 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:52:21
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:52:21.421947  488547 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:52:21.422161  488547 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:52:21.422172  488547 out.go:374] Setting ErrFile to fd 2...
	I1227 20:52:21.422176  488547 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:52:21.422528  488547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:52:21.423045  488547 out.go:368] Setting JSON to false
	I1227 20:52:21.424137  488547 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9294,"bootTime":1766859448,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:52:21.424212  488547 start.go:143] virtualization:  
	I1227 20:52:21.428437  488547 out.go:179] * [default-k8s-diff-port-058924] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:52:21.432986  488547 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:52:21.433098  488547 notify.go:221] Checking for updates...
	I1227 20:52:21.439692  488547 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:52:21.442910  488547 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:52:21.446000  488547 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:52:21.449050  488547 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:52:21.452155  488547 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:52:21.455832  488547 config.go:182] Loaded profile config "force-systemd-flag-604544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:52:21.455949  488547 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:52:21.501556  488547 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:52:21.501665  488547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:52:21.576252  488547 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:52:21.567287213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:52:21.576367  488547 docker.go:319] overlay module found
	I1227 20:52:21.579673  488547 out.go:179] * Using the docker driver based on user configuration
	I1227 20:52:21.582672  488547 start.go:309] selected driver: docker
	I1227 20:52:21.582690  488547 start.go:928] validating driver "docker" against <nil>
	I1227 20:52:21.582705  488547 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:52:21.583409  488547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:52:21.638627  488547 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:52:21.630271006 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:52:21.638775  488547 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 20:52:21.639001  488547 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:52:21.641917  488547 out.go:179] * Using Docker driver with root privileges
	I1227 20:52:21.644719  488547 cni.go:84] Creating CNI manager for ""
	I1227 20:52:21.644777  488547 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:52:21.644790  488547 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 20:52:21.644861  488547 start.go:353] cluster config:
	{Name:default-k8s-diff-port-058924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-058924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:52:21.647956  488547 out.go:179] * Starting "default-k8s-diff-port-058924" primary control-plane node in "default-k8s-diff-port-058924" cluster
	I1227 20:52:21.650708  488547 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:52:21.653592  488547 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:52:21.656549  488547 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:52:21.656595  488547 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:52:21.656610  488547 cache.go:65] Caching tarball of preloaded images
	I1227 20:52:21.656635  488547 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:52:21.656701  488547 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:52:21.656722  488547 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:52:21.656833  488547 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/config.json ...
	I1227 20:52:21.656849  488547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/config.json: {Name:mk1e938c57181d79ec6d2c5190c2c9a320091e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:52:21.675629  488547 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:52:21.675653  488547 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:52:21.675673  488547 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:52:21.675702  488547 start.go:360] acquireMachinesLock for default-k8s-diff-port-058924: {Name:mk1f359d7e6bf82a20b5c0ba5278536cffac40ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:52:21.675820  488547 start.go:364] duration metric: took 96.359µs to acquireMachinesLock for "default-k8s-diff-port-058924"
	I1227 20:52:21.675854  488547 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-058924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-058924 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:52:21.675925  488547 start.go:125] createHost starting for "" (driver="docker")
	I1227 20:52:21.679303  488547 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 20:52:21.679518  488547 start.go:159] libmachine.API.Create for "default-k8s-diff-port-058924" (driver="docker")
	I1227 20:52:21.679553  488547 client.go:173] LocalClient.Create starting
	I1227 20:52:21.679631  488547 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem
	I1227 20:52:21.679668  488547 main.go:144] libmachine: Decoding PEM data...
	I1227 20:52:21.679690  488547 main.go:144] libmachine: Parsing certificate...
	I1227 20:52:21.679739  488547 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem
	I1227 20:52:21.679766  488547 main.go:144] libmachine: Decoding PEM data...
	I1227 20:52:21.679781  488547 main.go:144] libmachine: Parsing certificate...
	I1227 20:52:21.680141  488547 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-058924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 20:52:21.695301  488547 cli_runner.go:211] docker network inspect default-k8s-diff-port-058924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 20:52:21.695386  488547 network_create.go:284] running [docker network inspect default-k8s-diff-port-058924] to gather additional debugging logs...
	I1227 20:52:21.695416  488547 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-058924
	W1227 20:52:21.710565  488547 cli_runner.go:211] docker network inspect default-k8s-diff-port-058924 returned with exit code 1
	I1227 20:52:21.710601  488547 network_create.go:287] error running [docker network inspect default-k8s-diff-port-058924]: docker network inspect default-k8s-diff-port-058924: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-058924 not found
	I1227 20:52:21.710613  488547 network_create.go:289] output of [docker network inspect default-k8s-diff-port-058924]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-058924 not found
	
	** /stderr **
	I1227 20:52:21.710718  488547 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:52:21.727037  488547 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9521cb9225c5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:1d:ef:38:b7:a6} reservation:<nil>}
	I1227 20:52:21.727486  488547 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-68d11cc2ab47 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:8d:ad:37:cb:fe} reservation:<nil>}
	I1227 20:52:21.727734  488547 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d3b7cfff4895 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:4a:e3:08:10:2f} reservation:<nil>}
	I1227 20:52:21.728168  488547 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019afe10}
	I1227 20:52:21.728189  488547 network_create.go:124] attempt to create docker network default-k8s-diff-port-058924 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 20:52:21.728241  488547 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-058924 default-k8s-diff-port-058924
	I1227 20:52:21.780139  488547 network_create.go:108] docker network default-k8s-diff-port-058924 192.168.76.0/24 created
	I1227 20:52:21.780177  488547 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-058924" container
	I1227 20:52:21.780260  488547 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 20:52:21.796682  488547 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-058924 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-058924 --label created_by.minikube.sigs.k8s.io=true
	I1227 20:52:21.813498  488547 oci.go:103] Successfully created a docker volume default-k8s-diff-port-058924
	I1227 20:52:21.813594  488547 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-058924-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-058924 --entrypoint /usr/bin/test -v default-k8s-diff-port-058924:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 20:52:22.341694  488547 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-058924
	I1227 20:52:22.341759  488547 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:52:22.341774  488547 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 20:52:22.341851  488547 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-058924:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 20:52:26.254539  488547 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-058924:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.912650748s)
	I1227 20:52:26.254573  488547 kic.go:203] duration metric: took 3.912795654s to extract preloaded images to volume ...
	W1227 20:52:26.254719  488547 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 20:52:26.254856  488547 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 20:52:26.308031  488547 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-058924 --name default-k8s-diff-port-058924 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-058924 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-058924 --network default-k8s-diff-port-058924 --ip 192.168.76.2 --volume default-k8s-diff-port-058924:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 20:52:26.601337  488547 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-058924 --format={{.State.Running}}
	I1227 20:52:26.623330  488547 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-058924 --format={{.State.Status}}
	I1227 20:52:26.649416  488547 cli_runner.go:164] Run: docker exec default-k8s-diff-port-058924 stat /var/lib/dpkg/alternatives/iptables
	I1227 20:52:26.702661  488547 oci.go:144] the created container "default-k8s-diff-port-058924" has a running status.
	I1227 20:52:26.702691  488547 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa...
	I1227 20:52:27.073883  488547 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 20:52:27.101597  488547 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-058924 --format={{.State.Status}}
	I1227 20:52:27.122598  488547 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 20:52:27.122639  488547 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-058924 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 20:52:27.194972  488547 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-058924 --format={{.State.Status}}
	I1227 20:52:27.229716  488547 machine.go:94] provisionDockerMachine start ...
	I1227 20:52:27.229807  488547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:52:27.252450  488547 main.go:144] libmachine: Using SSH client type: native
	I1227 20:52:27.252790  488547 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1227 20:52:27.252799  488547 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:52:27.253557  488547 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 20:52:30.392988  488547 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-058924
	
	I1227 20:52:30.393014  488547 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-058924"
	I1227 20:52:30.393078  488547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:52:30.410302  488547 main.go:144] libmachine: Using SSH client type: native
	I1227 20:52:30.410632  488547 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1227 20:52:30.410649  488547 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-058924 && echo "default-k8s-diff-port-058924" | sudo tee /etc/hostname
	I1227 20:52:30.568167  488547 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-058924
	
	I1227 20:52:30.568310  488547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:52:30.588698  488547 main.go:144] libmachine: Using SSH client type: native
	I1227 20:52:30.589016  488547 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1227 20:52:30.589033  488547 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-058924' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-058924/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-058924' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:52:30.725510  488547 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:52:30.725537  488547 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:52:30.725558  488547 ubuntu.go:190] setting up certificates
	I1227 20:52:30.725567  488547 provision.go:84] configureAuth start
	I1227 20:52:30.725626  488547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-058924
	I1227 20:52:30.747978  488547 provision.go:143] copyHostCerts
	I1227 20:52:30.748046  488547 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:52:30.748061  488547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:52:30.748135  488547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:52:30.748247  488547 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:52:30.748258  488547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:52:30.748290  488547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:52:30.748367  488547 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:52:30.748377  488547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:52:30.748401  488547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:52:30.748463  488547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-058924 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-058924 localhost minikube]
	I1227 20:52:31.160288  488547 provision.go:177] copyRemoteCerts
	I1227 20:52:31.160365  488547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:52:31.160409  488547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:52:31.177235  488547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa Username:docker}
	I1227 20:52:31.281127  488547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:52:31.297743  488547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1227 20:52:31.314185  488547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 20:52:31.330013  488547 provision.go:87] duration metric: took 604.42573ms to configureAuth
	I1227 20:52:31.330038  488547 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:52:31.330216  488547 config.go:182] Loaded profile config "default-k8s-diff-port-058924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:52:31.330326  488547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:52:31.347454  488547 main.go:144] libmachine: Using SSH client type: native
	I1227 20:52:31.347771  488547 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1227 20:52:31.347785  488547 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:52:31.633093  488547 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:52:31.633181  488547 machine.go:97] duration metric: took 4.403445765s to provisionDockerMachine
	I1227 20:52:31.633217  488547 client.go:176] duration metric: took 9.953652911s to LocalClient.Create
	I1227 20:52:31.633250  488547 start.go:167] duration metric: took 9.953732113s to libmachine.API.Create "default-k8s-diff-port-058924"
	I1227 20:52:31.633262  488547 start.go:293] postStartSetup for "default-k8s-diff-port-058924" (driver="docker")
	I1227 20:52:31.633272  488547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:52:31.633399  488547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:52:31.633477  488547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:52:31.650546  488547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa Username:docker}
	I1227 20:52:31.749265  488547 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:52:31.752624  488547 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:52:31.752655  488547 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:52:31.752668  488547 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:52:31.752722  488547 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:52:31.752804  488547 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:52:31.752908  488547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:52:31.760148  488547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:52:31.777748  488547 start.go:296] duration metric: took 144.469099ms for postStartSetup
	I1227 20:52:31.778457  488547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-058924
	I1227 20:52:31.796857  488547 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/config.json ...
	I1227 20:52:31.797156  488547 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:52:31.797206  488547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:52:31.813931  488547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa Username:docker}
	I1227 20:52:31.910477  488547 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:52:31.914895  488547 start.go:128] duration metric: took 10.238955121s to createHost
	I1227 20:52:31.914922  488547 start.go:83] releasing machines lock for "default-k8s-diff-port-058924", held for 10.239089583s
	I1227 20:52:31.915016  488547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-058924
	I1227 20:52:31.931506  488547 ssh_runner.go:195] Run: cat /version.json
	I1227 20:52:31.931562  488547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:52:31.931842  488547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:52:31.931912  488547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:52:31.949243  488547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa Username:docker}
	I1227 20:52:31.957640  488547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa Username:docker}
	I1227 20:52:32.156866  488547 ssh_runner.go:195] Run: systemctl --version
	I1227 20:52:32.163070  488547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:52:32.197227  488547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:52:32.201484  488547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:52:32.201562  488547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:52:32.228439  488547 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 20:52:32.228514  488547 start.go:496] detecting cgroup driver to use...
	I1227 20:52:32.228560  488547 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:52:32.228637  488547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:52:32.246285  488547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:52:32.258626  488547 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:52:32.258692  488547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:52:32.276805  488547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:52:32.295483  488547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:52:32.414079  488547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:52:32.534733  488547 docker.go:234] disabling docker service ...
	I1227 20:52:32.534810  488547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:52:32.556784  488547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:52:32.570483  488547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:52:32.686023  488547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:52:32.829652  488547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:52:32.842817  488547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:52:32.856028  488547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:52:32.856095  488547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:52:32.864508  488547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:52:32.864576  488547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:52:32.873200  488547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:52:32.882104  488547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:52:32.890988  488547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:52:32.899174  488547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:52:32.907547  488547 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:52:32.919981  488547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:52:32.928286  488547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:52:32.935447  488547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:52:32.942498  488547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:52:33.055728  488547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:52:33.217778  488547 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:52:33.217908  488547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:52:33.222060  488547 start.go:574] Will wait 60s for crictl version
	I1227 20:52:33.222130  488547 ssh_runner.go:195] Run: which crictl
	I1227 20:52:33.225552  488547 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:52:33.250441  488547 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:52:33.250529  488547 ssh_runner.go:195] Run: crio --version
	I1227 20:52:33.279238  488547 ssh_runner.go:195] Run: crio --version
	I1227 20:52:33.318478  488547 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:52:33.321323  488547 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-058924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:52:33.340844  488547 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 20:52:33.344941  488547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:52:33.354883  488547 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-058924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-058924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:52:33.355005  488547 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:52:33.355065  488547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:52:33.388477  488547 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:52:33.388506  488547 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:52:33.388564  488547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:52:33.416174  488547 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:52:33.416198  488547 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:52:33.416207  488547 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.35.0 crio true true} ...
	I1227 20:52:33.416294  488547 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-058924 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-058924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:52:33.416391  488547 ssh_runner.go:195] Run: crio config
	I1227 20:52:33.475737  488547 cni.go:84] Creating CNI manager for ""
	I1227 20:52:33.475763  488547 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:52:33.475785  488547 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:52:33.475810  488547 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-058924 NodeName:default-k8s-diff-port-058924 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:52:33.475953  488547 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-058924"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:52:33.476033  488547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:52:33.484315  488547 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:52:33.484387  488547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:52:33.492029  488547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1227 20:52:33.506917  488547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:52:33.520450  488547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I1227 20:52:33.534514  488547 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:52:33.538058  488547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:52:33.547183  488547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:52:33.655312  488547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:52:33.670301  488547 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924 for IP: 192.168.76.2
	I1227 20:52:33.670321  488547 certs.go:195] generating shared ca certs ...
	I1227 20:52:33.670337  488547 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:52:33.670522  488547 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:52:33.670594  488547 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:52:33.670609  488547 certs.go:257] generating profile certs ...
	I1227 20:52:33.670690  488547 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/client.key
	I1227 20:52:33.670711  488547 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/client.crt with IP's: []
	I1227 20:52:33.925389  488547 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/client.crt ...
	I1227 20:52:33.925422  488547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/client.crt: {Name:mk51f3ce571eb4f9d681b52afcd44556cd483b48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:52:33.925689  488547 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/client.key ...
	I1227 20:52:33.925707  488547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/client.key: {Name:mke630eec69e75ce2e3ed143575def4c3f76a524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:52:33.925843  488547 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/apiserver.key.eada78d3
	I1227 20:52:33.925864  488547 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/apiserver.crt.eada78d3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 20:52:34.029692  488547 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/apiserver.crt.eada78d3 ...
	I1227 20:52:34.029728  488547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/apiserver.crt.eada78d3: {Name:mk108b051b2f5c7a03ca7d203fe842e750eb2f4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:52:34.029908  488547 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/apiserver.key.eada78d3 ...
	I1227 20:52:34.029923  488547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/apiserver.key.eada78d3: {Name:mk74bad9513fb383026c4f1933e2b6a6287a895e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:52:34.030010  488547 certs.go:382] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/apiserver.crt.eada78d3 -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/apiserver.crt
	I1227 20:52:34.030088  488547 certs.go:386] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/apiserver.key.eada78d3 -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/apiserver.key
	I1227 20:52:34.030158  488547 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/proxy-client.key
	I1227 20:52:34.030177  488547 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/proxy-client.crt with IP's: []
	I1227 20:52:34.199290  488547 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/proxy-client.crt ...
	I1227 20:52:34.199321  488547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/proxy-client.crt: {Name:mkd95df446b11a9589664a4ca7a5f272f6ed8853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:52:34.199492  488547 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/proxy-client.key ...
	I1227 20:52:34.199507  488547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/proxy-client.key: {Name:mke13f2ffad14f838c4ff83a771b83fd0a52b6e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:52:34.199694  488547 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:52:34.199738  488547 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:52:34.199756  488547 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:52:34.199785  488547 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:52:34.199813  488547 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:52:34.199841  488547 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:52:34.199892  488547 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:52:34.200502  488547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:52:34.225702  488547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:52:34.243253  488547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:52:34.262204  488547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:52:34.279411  488547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:52:34.298926  488547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:52:34.315861  488547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:52:34.332227  488547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:52:34.348873  488547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:52:34.366560  488547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:52:34.383510  488547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:52:34.400162  488547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:52:34.412522  488547 ssh_runner.go:195] Run: openssl version
	I1227 20:52:34.418769  488547 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:52:34.425740  488547 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:52:34.432783  488547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:52:34.436449  488547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:52:34.436521  488547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:52:34.480298  488547 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:52:34.488206  488547 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2743362.pem /etc/ssl/certs/3ec20f2e.0
	I1227 20:52:34.495718  488547 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:52:34.503442  488547 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:52:34.511859  488547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:52:34.515457  488547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:52:34.515523  488547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:52:34.556130  488547 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:52:34.563691  488547 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 20:52:34.571124  488547 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:52:34.578458  488547 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:52:34.586146  488547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:52:34.589940  488547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:52:34.590013  488547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:52:34.630956  488547 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:52:34.638268  488547 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/274336.pem /etc/ssl/certs/51391683.0
	I1227 20:52:34.645283  488547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:52:34.648893  488547 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 20:52:34.648960  488547 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-058924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-058924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:52:34.649036  488547 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:52:34.649093  488547 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:52:34.681781  488547 cri.go:96] found id: ""
	I1227 20:52:34.681873  488547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:52:34.689549  488547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 20:52:34.697032  488547 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:52:34.697128  488547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:52:34.704711  488547 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:52:34.704741  488547 kubeadm.go:158] found existing configuration files:
	
	I1227 20:52:34.704792  488547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1227 20:52:34.712402  488547 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:52:34.712486  488547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:52:34.719700  488547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1227 20:52:34.727132  488547 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:52:34.727235  488547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:52:34.734325  488547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1227 20:52:34.741864  488547 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:52:34.741936  488547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:52:34.749421  488547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1227 20:52:34.757096  488547 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:52:34.757211  488547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 20:52:34.764486  488547 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:52:34.801416  488547 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 20:52:34.801668  488547 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 20:52:34.871322  488547 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 20:52:34.871433  488547 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 20:52:34.871502  488547 kubeadm.go:319] OS: Linux
	I1227 20:52:34.871577  488547 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 20:52:34.871655  488547 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 20:52:34.871744  488547 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 20:52:34.871817  488547 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 20:52:34.871899  488547 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 20:52:34.871975  488547 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 20:52:34.872042  488547 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 20:52:34.872131  488547 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 20:52:34.872211  488547 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 20:52:34.937642  488547 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 20:52:34.937758  488547 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 20:52:34.937855  488547 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 20:52:34.945589  488547 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 20:52:34.952072  488547 out.go:252]   - Generating certificates and keys ...
	I1227 20:52:34.952167  488547 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 20:52:34.952238  488547 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 20:52:35.080376  488547 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 20:52:35.148930  488547 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 20:52:35.397478  488547 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 20:52:36.125672  488547 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 20:52:36.753014  488547 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 20:52:36.753332  488547 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-058924 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 20:52:36.816655  488547 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 20:52:36.817020  488547 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-058924 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 20:52:37.225764  488547 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 20:52:37.578948  488547 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 20:52:37.734922  488547 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 20:52:37.735264  488547 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 20:52:37.823105  488547 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 20:52:37.890208  488547 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 20:52:38.504901  488547 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 20:52:38.728561  488547 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 20:52:39.092806  488547 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 20:52:39.093628  488547 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 20:52:39.096428  488547 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 20:52:39.099936  488547 out.go:252]   - Booting up control plane ...
	I1227 20:52:39.100052  488547 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 20:52:39.100136  488547 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 20:52:39.102173  488547 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 20:52:39.117763  488547 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 20:52:39.118094  488547 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 20:52:39.125578  488547 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 20:52:39.125959  488547 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 20:52:39.126009  488547 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 20:52:39.255793  488547 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 20:52:39.255920  488547 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 20:52:40.258475  488547 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001657179s
	I1227 20:52:40.260867  488547 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 20:52:40.260986  488547 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1227 20:52:40.261095  488547 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 20:52:40.261178  488547 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 20:52:41.784343  488547 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.522390948s
	I1227 20:52:44.080343  488547 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.819433948s
	I1227 20:52:45.762998  488547 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501419975s
	I1227 20:52:45.803652  488547 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 20:52:45.816181  488547 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 20:52:45.829972  488547 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 20:52:45.830212  488547 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-058924 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 20:52:45.843120  488547 kubeadm.go:319] [bootstrap-token] Using token: tb3kzy.w39znyhrfm1udbsf
	I1227 20:52:45.846305  488547 out.go:252]   - Configuring RBAC rules ...
	I1227 20:52:45.846431  488547 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 20:52:45.850204  488547 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 20:52:45.859974  488547 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 20:52:45.866836  488547 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 20:52:45.870794  488547 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 20:52:45.874824  488547 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 20:52:46.170287  488547 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 20:52:46.614699  488547 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 20:52:47.170397  488547 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 20:52:47.171637  488547 kubeadm.go:319] 
	I1227 20:52:47.171731  488547 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 20:52:47.171737  488547 kubeadm.go:319] 
	I1227 20:52:47.171822  488547 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 20:52:47.171830  488547 kubeadm.go:319] 
	I1227 20:52:47.171858  488547 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 20:52:47.171921  488547 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 20:52:47.171982  488547 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 20:52:47.171993  488547 kubeadm.go:319] 
	I1227 20:52:47.172062  488547 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 20:52:47.172066  488547 kubeadm.go:319] 
	I1227 20:52:47.172117  488547 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 20:52:47.172125  488547 kubeadm.go:319] 
	I1227 20:52:47.172180  488547 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 20:52:47.172256  488547 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 20:52:47.172324  488547 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 20:52:47.172328  488547 kubeadm.go:319] 
	I1227 20:52:47.172412  488547 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 20:52:47.172489  488547 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 20:52:47.172493  488547 kubeadm.go:319] 
	I1227 20:52:47.172587  488547 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token tb3kzy.w39znyhrfm1udbsf \
	I1227 20:52:47.172708  488547 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ff29328d1e0d612c7979c16c69d6042f5f31e931d111cc12c8320ed4e4ab5152 \
	I1227 20:52:47.172729  488547 kubeadm.go:319] 	--control-plane 
	I1227 20:52:47.172733  488547 kubeadm.go:319] 
	I1227 20:52:47.172831  488547 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 20:52:47.172835  488547 kubeadm.go:319] 
	I1227 20:52:47.172918  488547 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token tb3kzy.w39znyhrfm1udbsf \
	I1227 20:52:47.173039  488547 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ff29328d1e0d612c7979c16c69d6042f5f31e931d111cc12c8320ed4e4ab5152 
	I1227 20:52:47.176885  488547 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 20:52:47.177382  488547 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 20:52:47.177565  488547 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 20:52:47.177603  488547 cni.go:84] Creating CNI manager for ""
	I1227 20:52:47.177628  488547 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:52:47.180839  488547 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1227 20:52:47.183697  488547 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 20:52:47.188418  488547 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 20:52:47.188441  488547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 20:52:47.201785  488547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 20:52:47.497003  488547 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 20:52:47.497150  488547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:52:47.497251  488547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-058924 minikube.k8s.io/updated_at=2025_12_27T20_52_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562 minikube.k8s.io/name=default-k8s-diff-port-058924 minikube.k8s.io/primary=true
	I1227 20:52:47.627057  488547 ops.go:34] apiserver oom_adj: -16
	I1227 20:52:47.627211  488547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:52:48.127938  488547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:52:48.627820  488547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:52:49.127339  488547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:52:49.627765  488547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:52:50.127798  488547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:52:50.627357  488547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:52:51.127328  488547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:52:51.628007  488547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:52:51.763596  488547 kubeadm.go:1114] duration metric: took 4.266501371s to wait for elevateKubeSystemPrivileges
	I1227 20:52:51.763627  488547 kubeadm.go:403] duration metric: took 17.11467031s to StartCluster
	I1227 20:52:51.763643  488547 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:52:51.763704  488547 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:52:51.764281  488547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:52:51.764471  488547 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:52:51.764594  488547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 20:52:51.764836  488547 config.go:182] Loaded profile config "default-k8s-diff-port-058924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:52:51.764870  488547 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:52:51.764928  488547 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-058924"
	I1227 20:52:51.764942  488547 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-058924"
	I1227 20:52:51.764963  488547 host.go:66] Checking if "default-k8s-diff-port-058924" exists ...
	I1227 20:52:51.765464  488547 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-058924 --format={{.State.Status}}
	I1227 20:52:51.765802  488547 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-058924"
	I1227 20:52:51.765835  488547 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-058924"
	I1227 20:52:51.766121  488547 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-058924 --format={{.State.Status}}
	I1227 20:52:51.767961  488547 out.go:179] * Verifying Kubernetes components...
	I1227 20:52:51.770718  488547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:52:51.807738  488547 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:52:51.809216  488547 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-058924"
	I1227 20:52:51.809251  488547 host.go:66] Checking if "default-k8s-diff-port-058924" exists ...
	I1227 20:52:51.809736  488547 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-058924 --format={{.State.Status}}
	I1227 20:52:51.810910  488547 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:52:51.810943  488547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:52:51.811010  488547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:52:51.858439  488547 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:52:51.858461  488547 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:52:51.858554  488547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:52:51.860024  488547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa Username:docker}
	I1227 20:52:51.887882  488547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa Username:docker}
	I1227 20:52:52.096606  488547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 20:52:52.102804  488547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:52:52.141347  488547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:52:52.203864  488547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:52:52.813760  488547 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1227 20:52:52.815378  488547 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-058924" to be "Ready" ...
	I1227 20:52:53.212102  488547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.070717062s)
	I1227 20:52:53.212154  488547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.008267314s)
	I1227 20:52:53.237802  488547 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1227 20:52:53.240650  488547 addons.go:530] duration metric: took 1.475769985s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 20:52:53.318934  488547 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-058924" context rescaled to 1 replicas
	W1227 20:52:54.819009  488547 node_ready.go:57] node "default-k8s-diff-port-058924" has "Ready":"False" status (will retry)
	W1227 20:52:56.819110  488547 node_ready.go:57] node "default-k8s-diff-port-058924" has "Ready":"False" status (will retry)
	W1227 20:52:59.318635  488547 node_ready.go:57] node "default-k8s-diff-port-058924" has "Ready":"False" status (will retry)
	W1227 20:53:01.319879  488547 node_ready.go:57] node "default-k8s-diff-port-058924" has "Ready":"False" status (will retry)
	W1227 20:53:03.818150  488547 node_ready.go:57] node "default-k8s-diff-port-058924" has "Ready":"False" status (will retry)
	I1227 20:53:05.318854  488547 node_ready.go:49] node "default-k8s-diff-port-058924" is "Ready"
	I1227 20:53:05.318888  488547 node_ready.go:38] duration metric: took 12.503474773s for node "default-k8s-diff-port-058924" to be "Ready" ...
	I1227 20:53:05.318902  488547 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:53:05.318959  488547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:53:05.331547  488547 api_server.go:72] duration metric: took 13.567048596s to wait for apiserver process to appear ...
	I1227 20:53:05.331574  488547 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:53:05.331593  488547 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1227 20:53:05.340084  488547 api_server.go:325] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1227 20:53:05.341177  488547 api_server.go:141] control plane version: v1.35.0
	I1227 20:53:05.341215  488547 api_server.go:131] duration metric: took 9.63353ms to wait for apiserver health ...
	I1227 20:53:05.341224  488547 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:53:05.344281  488547 system_pods.go:59] 8 kube-system pods found
	I1227 20:53:05.344318  488547 system_pods.go:61] "coredns-7d764666f9-7wf76" [14f9ecf5-c5b1-4458-bce4-18c5f12a447a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:53:05.344326  488547 system_pods.go:61] "etcd-default-k8s-diff-port-058924" [b6248c66-2ad3-43a3-ba5a-b5f3bd9219a4] Running
	I1227 20:53:05.344332  488547 system_pods.go:61] "kindnet-8clbx" [a53eca44-5c16-4e1c-b208-061922a489d6] Running
	I1227 20:53:05.344337  488547 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-058924" [2a305629-e147-40ce-9422-c1010a8bbbcb] Running
	I1227 20:53:05.344344  488547 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-058924" [b4da8504-1367-4584-8f91-0de40e6c3b81] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:53:05.344357  488547 system_pods.go:61] "kube-proxy-m2mtv" [b85165f6-d028-4fd5-92e8-e1b227aa2270] Running
	I1227 20:53:05.344363  488547 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-058924" [e1319201-ba51-43c7-b16d-efd9120f0e5a] Running
	I1227 20:53:05.344368  488547 system_pods.go:61] "storage-provisioner" [e205de3f-3506-425b-a039-4dfa897cf8f9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:53:05.344374  488547 system_pods.go:74] duration metric: took 3.143237ms to wait for pod list to return data ...
	I1227 20:53:05.344381  488547 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:53:05.346944  488547 default_sa.go:45] found service account: "default"
	I1227 20:53:05.346968  488547 default_sa.go:55] duration metric: took 2.581161ms for default service account to be created ...
	I1227 20:53:05.346978  488547 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:53:05.349835  488547 system_pods.go:86] 8 kube-system pods found
	I1227 20:53:05.349869  488547 system_pods.go:89] "coredns-7d764666f9-7wf76" [14f9ecf5-c5b1-4458-bce4-18c5f12a447a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:53:05.349877  488547 system_pods.go:89] "etcd-default-k8s-diff-port-058924" [b6248c66-2ad3-43a3-ba5a-b5f3bd9219a4] Running
	I1227 20:53:05.349884  488547 system_pods.go:89] "kindnet-8clbx" [a53eca44-5c16-4e1c-b208-061922a489d6] Running
	I1227 20:53:05.349889  488547 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-058924" [2a305629-e147-40ce-9422-c1010a8bbbcb] Running
	I1227 20:53:05.349897  488547 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-058924" [b4da8504-1367-4584-8f91-0de40e6c3b81] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:53:05.349907  488547 system_pods.go:89] "kube-proxy-m2mtv" [b85165f6-d028-4fd5-92e8-e1b227aa2270] Running
	I1227 20:53:05.349922  488547 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-058924" [e1319201-ba51-43c7-b16d-efd9120f0e5a] Running
	I1227 20:53:05.349930  488547 system_pods.go:89] "storage-provisioner" [e205de3f-3506-425b-a039-4dfa897cf8f9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:53:05.349958  488547 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1227 20:53:05.653797  488547 system_pods.go:86] 8 kube-system pods found
	I1227 20:53:05.653839  488547 system_pods.go:89] "coredns-7d764666f9-7wf76" [14f9ecf5-c5b1-4458-bce4-18c5f12a447a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:53:05.653851  488547 system_pods.go:89] "etcd-default-k8s-diff-port-058924" [b6248c66-2ad3-43a3-ba5a-b5f3bd9219a4] Running
	I1227 20:53:05.653860  488547 system_pods.go:89] "kindnet-8clbx" [a53eca44-5c16-4e1c-b208-061922a489d6] Running
	I1227 20:53:05.653865  488547 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-058924" [2a305629-e147-40ce-9422-c1010a8bbbcb] Running
	I1227 20:53:05.653887  488547 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-058924" [b4da8504-1367-4584-8f91-0de40e6c3b81] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:53:05.653901  488547 system_pods.go:89] "kube-proxy-m2mtv" [b85165f6-d028-4fd5-92e8-e1b227aa2270] Running
	I1227 20:53:05.653907  488547 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-058924" [e1319201-ba51-43c7-b16d-efd9120f0e5a] Running
	I1227 20:53:05.653917  488547 system_pods.go:89] "storage-provisioner" [e205de3f-3506-425b-a039-4dfa897cf8f9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:53:05.937041  488547 system_pods.go:86] 8 kube-system pods found
	I1227 20:53:05.937080  488547 system_pods.go:89] "coredns-7d764666f9-7wf76" [14f9ecf5-c5b1-4458-bce4-18c5f12a447a] Running
	I1227 20:53:05.937088  488547 system_pods.go:89] "etcd-default-k8s-diff-port-058924" [b6248c66-2ad3-43a3-ba5a-b5f3bd9219a4] Running
	I1227 20:53:05.937093  488547 system_pods.go:89] "kindnet-8clbx" [a53eca44-5c16-4e1c-b208-061922a489d6] Running
	I1227 20:53:05.937097  488547 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-058924" [2a305629-e147-40ce-9422-c1010a8bbbcb] Running
	I1227 20:53:05.937123  488547 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-058924" [b4da8504-1367-4584-8f91-0de40e6c3b81] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:53:05.937137  488547 system_pods.go:89] "kube-proxy-m2mtv" [b85165f6-d028-4fd5-92e8-e1b227aa2270] Running
	I1227 20:53:05.937143  488547 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-058924" [e1319201-ba51-43c7-b16d-efd9120f0e5a] Running
	I1227 20:53:05.937148  488547 system_pods.go:89] "storage-provisioner" [e205de3f-3506-425b-a039-4dfa897cf8f9] Running
	I1227 20:53:05.937157  488547 system_pods.go:126] duration metric: took 590.171942ms to wait for k8s-apps to be running ...
	I1227 20:53:05.937169  488547 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:53:05.937226  488547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:53:05.950173  488547 system_svc.go:56] duration metric: took 12.991235ms WaitForService to wait for kubelet
	I1227 20:53:05.950212  488547 kubeadm.go:587] duration metric: took 14.185717759s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:53:05.950240  488547 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:53:05.953097  488547 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:53:05.953131  488547 node_conditions.go:123] node cpu capacity is 2
	I1227 20:53:05.953146  488547 node_conditions.go:105] duration metric: took 2.900216ms to run NodePressure ...
	I1227 20:53:05.953160  488547 start.go:242] waiting for startup goroutines ...
	I1227 20:53:05.953167  488547 start.go:247] waiting for cluster config update ...
	I1227 20:53:05.953179  488547 start.go:256] writing updated cluster config ...
	I1227 20:53:05.953593  488547 ssh_runner.go:195] Run: rm -f paused
	I1227 20:53:05.957232  488547 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:53:05.960857  488547 pod_ready.go:83] waiting for pod "coredns-7d764666f9-7wf76" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:53:05.965735  488547 pod_ready.go:94] pod "coredns-7d764666f9-7wf76" is "Ready"
	I1227 20:53:05.965763  488547 pod_ready.go:86] duration metric: took 4.876209ms for pod "coredns-7d764666f9-7wf76" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:53:05.968532  488547 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-058924" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:53:05.973531  488547 pod_ready.go:94] pod "etcd-default-k8s-diff-port-058924" is "Ready"
	I1227 20:53:05.973611  488547 pod_ready.go:86] duration metric: took 5.042055ms for pod "etcd-default-k8s-diff-port-058924" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:53:05.976001  488547 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-058924" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:53:05.980773  488547 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-058924" is "Ready"
	I1227 20:53:05.980802  488547 pod_ready.go:86] duration metric: took 4.732755ms for pod "kube-apiserver-default-k8s-diff-port-058924" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:53:05.983186  488547 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-058924" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:53:06.760993  488547 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-058924" is "Ready"
	I1227 20:53:06.761028  488547 pod_ready.go:86] duration metric: took 777.813019ms for pod "kube-controller-manager-default-k8s-diff-port-058924" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:53:06.961188  488547 pod_ready.go:83] waiting for pod "kube-proxy-m2mtv" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:53:07.361500  488547 pod_ready.go:94] pod "kube-proxy-m2mtv" is "Ready"
	I1227 20:53:07.361529  488547 pod_ready.go:86] duration metric: took 400.312803ms for pod "kube-proxy-m2mtv" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:53:07.562130  488547 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-058924" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:53:07.961440  488547 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-058924" is "Ready"
	I1227 20:53:07.961498  488547 pod_ready.go:86] duration metric: took 399.337365ms for pod "kube-scheduler-default-k8s-diff-port-058924" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:53:07.961511  488547 pod_ready.go:40] duration metric: took 2.004246972s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:53:08.038869  488547 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 20:53:08.041959  488547 out.go:203] 
	W1227 20:53:08.044812  488547 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 20:53:08.047674  488547 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 20:53:08.051574  488547 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-058924" cluster and "default" namespace by default
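For anyone replaying this run, the "Done!" state above can be spot-checked with plain kubectl against the profile's context (a minimal sketch; the context name default-k8s-diff-port-058924 comes from the log, and kubectl is assumed to be on PATH):

	# Nodes and system pods for the profile brought up above
	kubectl --context default-k8s-diff-port-058924 get nodes -o wide
	kubectl --context default-k8s-diff-port-058924 -n kube-system get pods
	# Or use the version-matched kubectl bundled with minikube (sidesteps the 1.33/1.35 skew warning above)
	minikube -p default-k8s-diff-port-058924 kubectl -- get pods -A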
	
	
	==> CRI-O <==
	Dec 27 20:53:05 default-k8s-diff-port-058924 crio[838]: time="2025-12-27T20:53:05.615605813Z" level=info msg="Created container 0e5ebdb6df37117e7338a63f5a8bb23accbffd42749e3c2cbeb3afdb983a2969: kube-system/coredns-7d764666f9-7wf76/coredns" id=50d498d5-f567-41f9-8732-91cbb192ae82 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:53:05 default-k8s-diff-port-058924 crio[838]: time="2025-12-27T20:53:05.616311163Z" level=info msg="Starting container: 0e5ebdb6df37117e7338a63f5a8bb23accbffd42749e3c2cbeb3afdb983a2969" id=6bab3213-2862-426b-b0d6-3c377af5816e name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:53:05 default-k8s-diff-port-058924 crio[838]: time="2025-12-27T20:53:05.623011296Z" level=info msg="Started container" PID=1766 containerID=0e5ebdb6df37117e7338a63f5a8bb23accbffd42749e3c2cbeb3afdb983a2969 description=kube-system/coredns-7d764666f9-7wf76/coredns id=6bab3213-2862-426b-b0d6-3c377af5816e name=/runtime.v1.RuntimeService/StartContainer sandboxID=3cd1d207e4464756b029c4398323c60fad57873615eca6871215eba8c4fb1a5b
	Dec 27 20:53:08 default-k8s-diff-port-058924 crio[838]: time="2025-12-27T20:53:08.585920692Z" level=info msg="Running pod sandbox: default/busybox/POD" id=d1471953-f300-41b2-9d9a-6fb7ca476fdc name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:53:08 default-k8s-diff-port-058924 crio[838]: time="2025-12-27T20:53:08.585988907Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:53:08 default-k8s-diff-port-058924 crio[838]: time="2025-12-27T20:53:08.594285105Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8d17c565cfa4afb330e90665c049f07dbdb9b5503566a73cf2e5988ce2bacd47 UID:064bee55-c240-4433-bde1-87acf5ac8840 NetNS:/var/run/netns/56c2b442-d615-4b72-b9e3-85debe2b0c77 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40021668f0}] Aliases:map[]}"
	Dec 27 20:53:08 default-k8s-diff-port-058924 crio[838]: time="2025-12-27T20:53:08.594582024Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 27 20:53:08 default-k8s-diff-port-058924 crio[838]: time="2025-12-27T20:53:08.603328482Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8d17c565cfa4afb330e90665c049f07dbdb9b5503566a73cf2e5988ce2bacd47 UID:064bee55-c240-4433-bde1-87acf5ac8840 NetNS:/var/run/netns/56c2b442-d615-4b72-b9e3-85debe2b0c77 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40021668f0}] Aliases:map[]}"
	Dec 27 20:53:08 default-k8s-diff-port-058924 crio[838]: time="2025-12-27T20:53:08.603615801Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 27 20:53:08 default-k8s-diff-port-058924 crio[838]: time="2025-12-27T20:53:08.607031449Z" level=info msg="Ran pod sandbox 8d17c565cfa4afb330e90665c049f07dbdb9b5503566a73cf2e5988ce2bacd47 with infra container: default/busybox/POD" id=d1471953-f300-41b2-9d9a-6fb7ca476fdc name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:53:08 default-k8s-diff-port-058924 crio[838]: time="2025-12-27T20:53:08.60812087Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d3129770-25ab-4b9c-a3f1-8b0e28de735a name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:53:08 default-k8s-diff-port-058924 crio[838]: time="2025-12-27T20:53:08.608242179Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d3129770-25ab-4b9c-a3f1-8b0e28de735a name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:53:08 default-k8s-diff-port-058924 crio[838]: time="2025-12-27T20:53:08.608283155Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d3129770-25ab-4b9c-a3f1-8b0e28de735a name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:53:08 default-k8s-diff-port-058924 crio[838]: time="2025-12-27T20:53:08.609944252Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=68f3d68e-1ad7-4a7c-b0fe-760ba71ee4a8 name=/runtime.v1.ImageService/PullImage
	Dec 27 20:53:08 default-k8s-diff-port-058924 crio[838]: time="2025-12-27T20:53:08.612382517Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 27 20:53:10 default-k8s-diff-port-058924 crio[838]: time="2025-12-27T20:53:10.49230203Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=68f3d68e-1ad7-4a7c-b0fe-760ba71ee4a8 name=/runtime.v1.ImageService/PullImage
	Dec 27 20:53:10 default-k8s-diff-port-058924 crio[838]: time="2025-12-27T20:53:10.493172168Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f9aab3b3-e86d-482a-ba00-b3223a6e1fcd name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:53:10 default-k8s-diff-port-058924 crio[838]: time="2025-12-27T20:53:10.495141408Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ac05655d-55b1-41d8-b634-adc7458d18a0 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:53:10 default-k8s-diff-port-058924 crio[838]: time="2025-12-27T20:53:10.502904222Z" level=info msg="Creating container: default/busybox/busybox" id=9c803cf5-a6d4-44f5-a759-9240ef58a7b1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:53:10 default-k8s-diff-port-058924 crio[838]: time="2025-12-27T20:53:10.503053912Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:53:10 default-k8s-diff-port-058924 crio[838]: time="2025-12-27T20:53:10.507691663Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:53:10 default-k8s-diff-port-058924 crio[838]: time="2025-12-27T20:53:10.508355882Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:53:10 default-k8s-diff-port-058924 crio[838]: time="2025-12-27T20:53:10.523330938Z" level=info msg="Created container 8d8c2a176f13a33612fdc28089882c3be2a73dd1d94cbf6bd0832d8558a370bf: default/busybox/busybox" id=9c803cf5-a6d4-44f5-a759-9240ef58a7b1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:53:10 default-k8s-diff-port-058924 crio[838]: time="2025-12-27T20:53:10.524199721Z" level=info msg="Starting container: 8d8c2a176f13a33612fdc28089882c3be2a73dd1d94cbf6bd0832d8558a370bf" id=c07099fc-11b7-4cb2-aed3-fd52c5f0cba3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:53:10 default-k8s-diff-port-058924 crio[838]: time="2025-12-27T20:53:10.525767964Z" level=info msg="Started container" PID=1824 containerID=8d8c2a176f13a33612fdc28089882c3be2a73dd1d94cbf6bd0832d8558a370bf description=default/busybox/busybox id=c07099fc-11b7-4cb2-aed3-fd52c5f0cba3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8d17c565cfa4afb330e90665c049f07dbdb9b5503566a73cf2e5988ce2bacd47
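The CRI-O excerpt above comes from the node's journal; the same stream can be pulled directly over the driver's ssh (a sketch, assuming the docker driver and the profile name from this run):

	# Tail the CRI-O service journal inside the minikube node container
	minikube -p default-k8s-diff-port-058924 ssh -- sudo journalctl -u crio --no-pager -n 50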
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	8d8c2a176f13a       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   8d17c565cfa4a       busybox                                                default
	0e5ebdb6df371       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                      12 seconds ago      Running             coredns                   0                   3cd1d207e4464       coredns-7d764666f9-7wf76                               kube-system
	9fa1160ddd254       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago      Running             storage-provisioner       0                   e51cc59b4d308       storage-provisioner                                    kube-system
	d3e95d821318d       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    23 seconds ago      Running             kindnet-cni               0                   854587462c396       kindnet-8clbx                                          kube-system
	49e1dfa9695be       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                      25 seconds ago      Running             kube-proxy                0                   5aa72dda77c79       kube-proxy-m2mtv                                       kube-system
	41cca4fe5e47b       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                      37 seconds ago      Running             kube-scheduler            0                   4071891e855d7       kube-scheduler-default-k8s-diff-port-058924            kube-system
	0ec292fa1dcf7       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                      37 seconds ago      Running             kube-apiserver            0                   6c9bbf9c225d0       kube-apiserver-default-k8s-diff-port-058924            kube-system
	00e45ecca46a3       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                      37 seconds ago      Running             etcd                      0                   7269d3a20a04e       etcd-default-k8s-diff-port-058924                      kube-system
	5adadc2f2ec48       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                      37 seconds ago      Running             kube-controller-manager   0                   2d4a5763c8d6e       kube-controller-manager-default-k8s-diff-port-058924   kube-system
	
	
	==> coredns [0e5ebdb6df37117e7338a63f5a8bb23accbffd42749e3c2cbeb3afdb983a2969] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:33090 - 2885 "HINFO IN 8535599301138925669.6233115740874268775. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015882073s
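The host record that the start log reported injecting into CoreDNS (host.minikube.internal at 20:52:52 above) lives in the coredns ConfigMap; a quick way to confirm it, assuming the same context:

	kubectl --context default-k8s-diff-port-058924 -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'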
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-058924
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-058924
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=default-k8s-diff-port-058924
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_52_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:52:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-058924
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:53:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:53:17 +0000   Sat, 27 Dec 2025 20:52:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:53:17 +0000   Sat, 27 Dec 2025 20:52:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:53:17 +0000   Sat, 27 Dec 2025 20:52:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:53:17 +0000   Sat, 27 Dec 2025 20:53:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-058924
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                c6cef3de-c29f-4e64-acd9-52f541b38c56
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-7d764666f9-7wf76                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     25s
	  kube-system                 etcd-default-k8s-diff-port-058924                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         31s
	  kube-system                 kindnet-8clbx                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-default-k8s-diff-port-058924             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-058924    200m (10%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-m2mtv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-default-k8s-diff-port-058924             100m (5%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node default-k8s-diff-port-058924 event: Registered Node default-k8s-diff-port-058924 in Controller
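The node description above is standard kubectl output for the single control-plane node; the equivalent command, assuming the same context, is:

	kubectl --context default-k8s-diff-port-058924 describe node default-k8s-diff-port-058924

The 42% CPU request figure in the allocated-resources table is simply the 850m total of pod requests divided by the node's 2-CPU (2000m) allocatable.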
	
	
	==> dmesg <==
	[Dec27 20:19] overlayfs: idmapped layers are currently not supported
	[ +36.244108] systemd-journald[225]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 20:22] overlayfs: idmapped layers are currently not supported
	[Dec27 20:23] overlayfs: idmapped layers are currently not supported
	[Dec27 20:24] overlayfs: idmapped layers are currently not supported
	[Dec27 20:25] overlayfs: idmapped layers are currently not supported
	[ +35.447549] overlayfs: idmapped layers are currently not supported
	[Dec27 20:26] overlayfs: idmapped layers are currently not supported
	[Dec27 20:27] overlayfs: idmapped layers are currently not supported
	[  +6.770645] overlayfs: idmapped layers are currently not supported
	[Dec27 20:28] overlayfs: idmapped layers are currently not supported
	[ +25.872751] overlayfs: idmapped layers are currently not supported
	[Dec27 20:29] overlayfs: idmapped layers are currently not supported
	[ +32.997137] overlayfs: idmapped layers are currently not supported
	[Dec27 20:31] overlayfs: idmapped layers are currently not supported
	[Dec27 20:33] overlayfs: idmapped layers are currently not supported
	[ +33.772475] overlayfs: idmapped layers are currently not supported
	[Dec27 20:39] overlayfs: idmapped layers are currently not supported
	[Dec27 20:40] overlayfs: idmapped layers are currently not supported
	[Dec27 20:44] overlayfs: idmapped layers are currently not supported
	[Dec27 20:45] overlayfs: idmapped layers are currently not supported
	[Dec27 20:49] overlayfs: idmapped layers are currently not supported
	[Dec27 20:50] overlayfs: idmapped layers are currently not supported
	[Dec27 20:51] overlayfs: idmapped layers are currently not supported
	[Dec27 20:52] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [00e45ecca46a3cd1d26a071785d02c2ccdf876be149fa9d5cd305396678d85ec] <==
	{"level":"info","ts":"2025-12-27T20:52:40.737659Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T20:52:41.703668Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T20:52:41.703777Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T20:52:41.703851Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-12-27T20:52:41.703905Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:52:41.703947Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:52:41.704980Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T20:52:41.705024Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:52:41.705061Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-27T20:52:41.705096Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T20:52:41.709105Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-diff-port-058924 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:52:41.709186Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:52:41.708957Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:52:41.709569Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:52:41.711790Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:52:41.728301Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:52:41.728418Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:52:41.729104Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:52:41.730501Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:52:41.731680Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:52:41.731755Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:52:41.734342Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T20:52:41.737511Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T20:52:41.738763Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:52:41.739537Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
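The etcd log shows a routine single-member election (pre-candidate, candidate, then leader at term 2) followed by the storage-version bump to 3.6.0. A hedged health probe from outside the pod, assuming minikube's certificate layout under /var/lib/minikube/certs/etcd (the certificateDir reported by kubeadm earlier in this log):

	kubectl --context default-k8s-diff-port-058924 -n kube-system exec etcd-default-k8s-diff-port-058924 -- \
	  etcdctl --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint health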
	
	
	==> kernel <==
	 20:53:17 up  2:35,  0 user,  load average: 1.56, 1.69, 1.83
	Linux default-k8s-diff-port-058924 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d3e95d821318d8e3d74d81fdd00965f1e915bd6349006e4b963d2d7ce38233cc] <==
	I1227 20:52:54.438030       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:52:54.438497       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 20:52:54.438649       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:52:54.438667       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:52:54.438680       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:52:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:52:54.730708       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:52:54.737896       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:52:54.737988       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:52:54.738169       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 20:52:54.929638       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:52:54.929737       1 metrics.go:72] Registering metrics
	I1227 20:52:54.929823       1 controller.go:711] "Syncing nftables rules"
	I1227 20:53:04.731952       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:53:04.732001       1 main.go:301] handling current node
	I1227 20:53:14.733535       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:53:14.733574       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0ec292fa1dcf713ab4be13d4666a89f36cdfde11208c94ae30e35252ce35b354] <==
	E1227 20:52:44.115838       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1227 20:52:44.117599       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 20:52:44.120712       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:52:44.121829       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:52:44.121927       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 20:52:44.162990       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 20:52:44.316384       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:52:44.769156       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1227 20:52:44.777337       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1227 20:52:44.777423       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:52:45.617409       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:52:45.668593       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:52:45.775845       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 20:52:45.785100       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1227 20:52:45.786264       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:52:45.791167       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:52:45.976944       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:52:46.595997       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:52:46.613534       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 20:52:46.638047       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 20:52:51.482253       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:52:51.633878       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1227 20:52:51.756735       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:52:51.786444       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1227 20:53:16.394949       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:36090: use of closed network connection
	
	
	==> kube-controller-manager [5adadc2f2ec48461e49d409e3d0d497841411ff6f5b0c25dec00a3cf8bc82b54] <==
	I1227 20:52:50.804067       1 shared_informer.go:377] "Caches are synced"
	I1227 20:52:50.804073       1 shared_informer.go:377] "Caches are synced"
	I1227 20:52:50.804079       1 shared_informer.go:377] "Caches are synced"
	I1227 20:52:50.804084       1 shared_informer.go:377] "Caches are synced"
	I1227 20:52:50.804095       1 shared_informer.go:377] "Caches are synced"
	I1227 20:52:50.804102       1 shared_informer.go:377] "Caches are synced"
	I1227 20:52:50.804108       1 shared_informer.go:377] "Caches are synced"
	I1227 20:52:50.804121       1 shared_informer.go:377] "Caches are synced"
	I1227 20:52:50.804128       1 shared_informer.go:377] "Caches are synced"
	I1227 20:52:50.804146       1 shared_informer.go:377] "Caches are synced"
	I1227 20:52:50.804151       1 shared_informer.go:377] "Caches are synced"
	I1227 20:52:50.804156       1 shared_informer.go:377] "Caches are synced"
	I1227 20:52:50.804163       1 shared_informer.go:377] "Caches are synced"
	I1227 20:52:50.804178       1 shared_informer.go:377] "Caches are synced"
	I1227 20:52:50.804185       1 shared_informer.go:377] "Caches are synced"
	I1227 20:52:50.804192       1 shared_informer.go:377] "Caches are synced"
	I1227 20:52:50.804197       1 shared_informer.go:377] "Caches are synced"
	I1227 20:52:50.828490       1 range_allocator.go:433] "Set node PodCIDR" node="default-k8s-diff-port-058924" podCIDRs=["10.244.0.0/24"]
	I1227 20:52:50.840089       1 shared_informer.go:377] "Caches are synced"
	I1227 20:52:50.851035       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:52:50.902904       1 shared_informer.go:377] "Caches are synced"
	I1227 20:52:50.902934       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:52:50.902942       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:52:50.951766       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:06.050187       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [49e1dfa9695be014a9bbb6af1c453f195b0c696e2f1d10a7259894132a2763c6] <==
	I1227 20:52:52.371503       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:52:52.459179       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:52:52.561280       1 shared_informer.go:377] "Caches are synced"
	I1227 20:52:52.561322       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 20:52:52.561412       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:52:52.599458       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:52:52.599510       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:52:52.609595       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:52:52.609882       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:52:52.609898       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:52:52.611251       1 config.go:200] "Starting service config controller"
	I1227 20:52:52.611262       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:52:52.611282       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:52:52.611286       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:52:52.611296       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:52:52.611306       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:52:52.611949       1 config.go:309] "Starting node config controller"
	I1227 20:52:52.611957       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:52:52.611963       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:52:52.711878       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 20:52:52.711910       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:52:52.711936       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [41cca4fe5e47b5a07491ed11df5dd4ce75bc806e19e3c282d5f68cf2188803b0] <==
	E1227 20:52:44.086909       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 20:52:44.086950       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 20:52:44.087009       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 20:52:44.087057       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 20:52:44.087108       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 20:52:44.087364       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 20:52:44.087424       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 20:52:44.087470       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 20:52:44.087534       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 20:52:44.087577       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 20:52:44.087637       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 20:52:44.088730       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 20:52:44.089256       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 20:52:45.069706       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 20:52:45.084228       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 20:52:45.095661       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 20:52:45.119848       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 20:52:45.222493       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 20:52:45.253486       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 20:52:45.279791       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 20:52:45.308971       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 20:52:45.334507       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 20:52:45.360570       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 20:52:45.364052       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	I1227 20:52:48.154348       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:52:51 default-k8s-diff-port-058924 kubelet[1298]: I1227 20:52:51.794297    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a53eca44-5c16-4e1c-b208-061922a489d6-lib-modules\") pod \"kindnet-8clbx\" (UID: \"a53eca44-5c16-4e1c-b208-061922a489d6\") " pod="kube-system/kindnet-8clbx"
	Dec 27 20:52:51 default-k8s-diff-port-058924 kubelet[1298]: I1227 20:52:51.794360    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b85165f6-d028-4fd5-92e8-e1b227aa2270-lib-modules\") pod \"kube-proxy-m2mtv\" (UID: \"b85165f6-d028-4fd5-92e8-e1b227aa2270\") " pod="kube-system/kube-proxy-m2mtv"
	Dec 27 20:52:51 default-k8s-diff-port-058924 kubelet[1298]: I1227 20:52:51.949205    1298 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 27 20:52:52 default-k8s-diff-port-058924 kubelet[1298]: W1227 20:52:52.047997    1298 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/14a8831f1ae279bdee9cb950b754d19cb55a9a96bb1c6a18f3fb90e8bfce9436/crio-854587462c3963a3597cd8befb61956cda74f0f2694183550c3d305b3e976977 WatchSource:0}: Error finding container 854587462c3963a3597cd8befb61956cda74f0f2694183550c3d305b3e976977: Status 404 returned error can't find the container with id 854587462c3963a3597cd8befb61956cda74f0f2694183550c3d305b3e976977
	Dec 27 20:52:54 default-k8s-diff-port-058924 kubelet[1298]: I1227 20:52:54.717123    1298 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-m2mtv" podStartSLOduration=3.717107582 podStartE2EDuration="3.717107582s" podCreationTimestamp="2025-12-27 20:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:52:52.701121751 +0000 UTC m=+6.270121172" watchObservedRunningTime="2025-12-27 20:52:54.717107582 +0000 UTC m=+8.286107011"
	Dec 27 20:52:55 default-k8s-diff-port-058924 kubelet[1298]: E1227 20:52:55.178929    1298 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-058924" containerName="kube-scheduler"
	Dec 27 20:52:55 default-k8s-diff-port-058924 kubelet[1298]: I1227 20:52:55.192961    1298 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-8clbx" podStartSLOduration=1.909103826 podStartE2EDuration="4.192928391s" podCreationTimestamp="2025-12-27 20:52:51 +0000 UTC" firstStartedPulling="2025-12-27 20:52:52.060736785 +0000 UTC m=+5.629736205" lastFinishedPulling="2025-12-27 20:52:54.344561349 +0000 UTC m=+7.913560770" observedRunningTime="2025-12-27 20:52:54.718338809 +0000 UTC m=+8.287338238" watchObservedRunningTime="2025-12-27 20:52:55.192928391 +0000 UTC m=+8.761927811"
	Dec 27 20:52:56 default-k8s-diff-port-058924 kubelet[1298]: E1227 20:52:56.411734    1298 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-058924" containerName="kube-controller-manager"
	Dec 27 20:53:00 default-k8s-diff-port-058924 kubelet[1298]: E1227 20:53:00.614626    1298 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-058924" containerName="etcd"
	Dec 27 20:53:01 default-k8s-diff-port-058924 kubelet[1298]: E1227 20:53:01.426042    1298 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-058924" containerName="kube-apiserver"
	Dec 27 20:53:05 default-k8s-diff-port-058924 kubelet[1298]: I1227 20:53:05.175386    1298 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 27 20:53:05 default-k8s-diff-port-058924 kubelet[1298]: E1227 20:53:05.190905    1298 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-058924" containerName="kube-scheduler"
	Dec 27 20:53:05 default-k8s-diff-port-058924 kubelet[1298]: I1227 20:53:05.335073    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e205de3f-3506-425b-a039-4dfa897cf8f9-tmp\") pod \"storage-provisioner\" (UID: \"e205de3f-3506-425b-a039-4dfa897cf8f9\") " pod="kube-system/storage-provisioner"
	Dec 27 20:53:05 default-k8s-diff-port-058924 kubelet[1298]: I1227 20:53:05.335150    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmxdd\" (UniqueName: \"kubernetes.io/projected/e205de3f-3506-425b-a039-4dfa897cf8f9-kube-api-access-dmxdd\") pod \"storage-provisioner\" (UID: \"e205de3f-3506-425b-a039-4dfa897cf8f9\") " pod="kube-system/storage-provisioner"
	Dec 27 20:53:05 default-k8s-diff-port-058924 kubelet[1298]: I1227 20:53:05.335205    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzchd\" (UniqueName: \"kubernetes.io/projected/14f9ecf5-c5b1-4458-bce4-18c5f12a447a-kube-api-access-xzchd\") pod \"coredns-7d764666f9-7wf76\" (UID: \"14f9ecf5-c5b1-4458-bce4-18c5f12a447a\") " pod="kube-system/coredns-7d764666f9-7wf76"
	Dec 27 20:53:05 default-k8s-diff-port-058924 kubelet[1298]: I1227 20:53:05.335242    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14f9ecf5-c5b1-4458-bce4-18c5f12a447a-config-volume\") pod \"coredns-7d764666f9-7wf76\" (UID: \"14f9ecf5-c5b1-4458-bce4-18c5f12a447a\") " pod="kube-system/coredns-7d764666f9-7wf76"
	Dec 27 20:53:05 default-k8s-diff-port-058924 kubelet[1298]: E1227 20:53:05.725155    1298 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-7wf76" containerName="coredns"
	Dec 27 20:53:05 default-k8s-diff-port-058924 kubelet[1298]: I1227 20:53:05.766073    1298 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-7wf76" podStartSLOduration=13.766056843 podStartE2EDuration="13.766056843s" podCreationTimestamp="2025-12-27 20:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:53:05.74450655 +0000 UTC m=+19.313505970" watchObservedRunningTime="2025-12-27 20:53:05.766056843 +0000 UTC m=+19.335056264"
	Dec 27 20:53:05 default-k8s-diff-port-058924 kubelet[1298]: I1227 20:53:05.795483    1298 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.79546059 podStartE2EDuration="12.79546059s" podCreationTimestamp="2025-12-27 20:52:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:53:05.767375607 +0000 UTC m=+19.336375028" watchObservedRunningTime="2025-12-27 20:53:05.79546059 +0000 UTC m=+19.364460019"
	Dec 27 20:53:06 default-k8s-diff-port-058924 kubelet[1298]: E1227 20:53:06.420199    1298 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-058924" containerName="kube-controller-manager"
	Dec 27 20:53:06 default-k8s-diff-port-058924 kubelet[1298]: E1227 20:53:06.730809    1298 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-7wf76" containerName="coredns"
	Dec 27 20:53:07 default-k8s-diff-port-058924 kubelet[1298]: E1227 20:53:07.733313    1298 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-7wf76" containerName="coredns"
	Dec 27 20:53:08 default-k8s-diff-port-058924 kubelet[1298]: I1227 20:53:08.358540    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfxgq\" (UniqueName: \"kubernetes.io/projected/064bee55-c240-4433-bde1-87acf5ac8840-kube-api-access-pfxgq\") pod \"busybox\" (UID: \"064bee55-c240-4433-bde1-87acf5ac8840\") " pod="default/busybox"
	Dec 27 20:53:08 default-k8s-diff-port-058924 kubelet[1298]: W1227 20:53:08.605350    1298 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/14a8831f1ae279bdee9cb950b754d19cb55a9a96bb1c6a18f3fb90e8bfce9436/crio-8d17c565cfa4afb330e90665c049f07dbdb9b5503566a73cf2e5988ce2bacd47 WatchSource:0}: Error finding container 8d17c565cfa4afb330e90665c049f07dbdb9b5503566a73cf2e5988ce2bacd47: Status 404 returned error can't find the container with id 8d17c565cfa4afb330e90665c049f07dbdb9b5503566a73cf2e5988ce2bacd47
	Dec 27 20:53:10 default-k8s-diff-port-058924 kubelet[1298]: I1227 20:53:10.755702    1298 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.870327107 podStartE2EDuration="2.755687803s" podCreationTimestamp="2025-12-27 20:53:08 +0000 UTC" firstStartedPulling="2025-12-27 20:53:08.608608249 +0000 UTC m=+22.177607670" lastFinishedPulling="2025-12-27 20:53:10.493968937 +0000 UTC m=+24.062968366" observedRunningTime="2025-12-27 20:53:10.755458246 +0000 UTC m=+24.324457675" watchObservedRunningTime="2025-12-27 20:53:10.755687803 +0000 UTC m=+24.324687240"
	
	
	==> storage-provisioner [9fa1160ddd254b226f5e29f0cae3a8af598abe7b9e80aad8e4191e309a4216e1] <==
	I1227 20:53:05.622663       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 20:53:05.648278       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 20:53:05.648417       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 20:53:05.653609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:53:05.660465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:53:05.660702       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 20:53:05.662391       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"88bc2d5f-416e-4b56-8bda-cffc96e439d9", APIVersion:"v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-058924_ae17b7c5-d68a-4a00-8760-79fd42da72fd became leader
	I1227 20:53:05.662569       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-058924_ae17b7c5-d68a-4a00-8760-79fd42da72fd!
	W1227 20:53:05.670607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:53:05.678004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:53:05.765419       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-058924_ae17b7c5-d68a-4a00-8760-79fd42da72fd!
	W1227 20:53:07.680795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:53:07.685572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:53:09.688523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:53:09.692568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:53:11.695382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:53:11.699637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:53:13.702331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:53:13.706724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:53:15.709744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:53:15.713912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:53:17.719062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:53:17.724221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-058924 -n default-k8s-diff-port-058924
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-058924 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.55s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-058924 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-058924 --alsologtostderr -v=1: exit status 80 (2.01717148s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-058924 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:54:38.126697  495198 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:54:38.126909  495198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:54:38.126921  495198 out.go:374] Setting ErrFile to fd 2...
	I1227 20:54:38.126928  495198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:54:38.127497  495198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:54:38.128098  495198 out.go:368] Setting JSON to false
	I1227 20:54:38.128129  495198 mustload.go:66] Loading cluster: default-k8s-diff-port-058924
	I1227 20:54:38.129086  495198 config.go:182] Loaded profile config "default-k8s-diff-port-058924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:54:38.130257  495198 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-058924 --format={{.State.Status}}
	I1227 20:54:38.148268  495198 host.go:66] Checking if "default-k8s-diff-port-058924" exists ...
	I1227 20:54:38.148790  495198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:54:38.227700  495198 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-27 20:54:38.210666839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:54:38.228497  495198 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22332/minikube-v1.37.0-1766811082-22332-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766811082-22332/minikube-v1.37.0-1766811082-22332-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766811082-22332-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:default-k8s-diff-port-058924 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarni
ng:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 20:54:38.231429  495198 out.go:179] * Pausing node default-k8s-diff-port-058924 ... 
	I1227 20:54:38.234479  495198 host.go:66] Checking if "default-k8s-diff-port-058924" exists ...
	I1227 20:54:38.234863  495198 ssh_runner.go:195] Run: systemctl --version
	I1227 20:54:38.234937  495198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:54:38.258886  495198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa Username:docker}
	I1227 20:54:38.367973  495198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:54:38.394705  495198 pause.go:52] kubelet running: true
	I1227 20:54:38.394778  495198 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:54:38.600980  495198 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:54:38.601059  495198 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:54:38.667192  495198 cri.go:96] found id: "947ffc68c455a66d413ac0e1a88286d4999d302eb9862e8dbaa8bdca4e9962f5"
	I1227 20:54:38.667216  495198 cri.go:96] found id: "ec52181844fe079b145958d700f1e1fe5dcf49cb2b9cc8579e21772009c71688"
	I1227 20:54:38.667221  495198 cri.go:96] found id: "38227fcb6aa735a8d71a2d3a9cbddcc31e2b4e9f35fd1f6705090de387c7487f"
	I1227 20:54:38.667225  495198 cri.go:96] found id: "4a06bde369d1ed3ed4adcb37795d14e35cc6d700346f1d8735f87fcc17f5acbd"
	I1227 20:54:38.667228  495198 cri.go:96] found id: "84c8ca768685ecc278641733fdb714032c8f936c2c0791b5cd5d1aa930606977"
	I1227 20:54:38.667231  495198 cri.go:96] found id: "3f3183a409a49620f674f6cfc989ab37a01f933a3e0df3233dd981b0090f1459"
	I1227 20:54:38.667235  495198 cri.go:96] found id: "8fd1fa375b3beeeb401d83d95cfb1937027fe068753413aea7fd26b9455cd5a0"
	I1227 20:54:38.667238  495198 cri.go:96] found id: "59e2430d76d4f0fe1a76d346425874373d6f2997b74d9256e8d9c44383dd8e9c"
	I1227 20:54:38.667242  495198 cri.go:96] found id: "0284e6610331862ca822c0a398c4ebd7e2be94ee4e5d1cb1536b5202a533ba8b"
	I1227 20:54:38.667248  495198 cri.go:96] found id: "37c4318d86e79f02bbe58773a1a5fb7cfc6d4907577ccae135cbebe491459fd3"
	I1227 20:54:38.667271  495198 cri.go:96] found id: "cf047da0f34ba382065ad57a1c14715622f4e810d1c6aa2686329d2f26c64f9d"
	I1227 20:54:38.667284  495198 cri.go:96] found id: ""
	I1227 20:54:38.667345  495198 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:54:38.678254  495198 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:54:38Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:54:38.955839  495198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:54:38.972429  495198 pause.go:52] kubelet running: false
	I1227 20:54:38.972545  495198 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:54:39.180291  495198 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:54:39.180420  495198 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:54:39.248138  495198 cri.go:96] found id: "947ffc68c455a66d413ac0e1a88286d4999d302eb9862e8dbaa8bdca4e9962f5"
	I1227 20:54:39.248173  495198 cri.go:96] found id: "ec52181844fe079b145958d700f1e1fe5dcf49cb2b9cc8579e21772009c71688"
	I1227 20:54:39.248178  495198 cri.go:96] found id: "38227fcb6aa735a8d71a2d3a9cbddcc31e2b4e9f35fd1f6705090de387c7487f"
	I1227 20:54:39.248181  495198 cri.go:96] found id: "4a06bde369d1ed3ed4adcb37795d14e35cc6d700346f1d8735f87fcc17f5acbd"
	I1227 20:54:39.248206  495198 cri.go:96] found id: "84c8ca768685ecc278641733fdb714032c8f936c2c0791b5cd5d1aa930606977"
	I1227 20:54:39.248217  495198 cri.go:96] found id: "3f3183a409a49620f674f6cfc989ab37a01f933a3e0df3233dd981b0090f1459"
	I1227 20:54:39.248220  495198 cri.go:96] found id: "8fd1fa375b3beeeb401d83d95cfb1937027fe068753413aea7fd26b9455cd5a0"
	I1227 20:54:39.248223  495198 cri.go:96] found id: "59e2430d76d4f0fe1a76d346425874373d6f2997b74d9256e8d9c44383dd8e9c"
	I1227 20:54:39.248227  495198 cri.go:96] found id: "0284e6610331862ca822c0a398c4ebd7e2be94ee4e5d1cb1536b5202a533ba8b"
	I1227 20:54:39.248233  495198 cri.go:96] found id: "37c4318d86e79f02bbe58773a1a5fb7cfc6d4907577ccae135cbebe491459fd3"
	I1227 20:54:39.248236  495198 cri.go:96] found id: "cf047da0f34ba382065ad57a1c14715622f4e810d1c6aa2686329d2f26c64f9d"
	I1227 20:54:39.248239  495198 cri.go:96] found id: ""
	I1227 20:54:39.248309  495198 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:54:39.718440  495198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:54:39.736847  495198 pause.go:52] kubelet running: false
	I1227 20:54:39.736933  495198 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:54:39.933828  495198 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:54:39.933910  495198 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:54:40.028251  495198 cri.go:96] found id: "947ffc68c455a66d413ac0e1a88286d4999d302eb9862e8dbaa8bdca4e9962f5"
	I1227 20:54:40.028274  495198 cri.go:96] found id: "ec52181844fe079b145958d700f1e1fe5dcf49cb2b9cc8579e21772009c71688"
	I1227 20:54:40.028279  495198 cri.go:96] found id: "38227fcb6aa735a8d71a2d3a9cbddcc31e2b4e9f35fd1f6705090de387c7487f"
	I1227 20:54:40.028282  495198 cri.go:96] found id: "4a06bde369d1ed3ed4adcb37795d14e35cc6d700346f1d8735f87fcc17f5acbd"
	I1227 20:54:40.028286  495198 cri.go:96] found id: "84c8ca768685ecc278641733fdb714032c8f936c2c0791b5cd5d1aa930606977"
	I1227 20:54:40.028290  495198 cri.go:96] found id: "3f3183a409a49620f674f6cfc989ab37a01f933a3e0df3233dd981b0090f1459"
	I1227 20:54:40.028294  495198 cri.go:96] found id: "8fd1fa375b3beeeb401d83d95cfb1937027fe068753413aea7fd26b9455cd5a0"
	I1227 20:54:40.028297  495198 cri.go:96] found id: "59e2430d76d4f0fe1a76d346425874373d6f2997b74d9256e8d9c44383dd8e9c"
	I1227 20:54:40.028300  495198 cri.go:96] found id: "0284e6610331862ca822c0a398c4ebd7e2be94ee4e5d1cb1536b5202a533ba8b"
	I1227 20:54:40.028314  495198 cri.go:96] found id: "37c4318d86e79f02bbe58773a1a5fb7cfc6d4907577ccae135cbebe491459fd3"
	I1227 20:54:40.028319  495198 cri.go:96] found id: "cf047da0f34ba382065ad57a1c14715622f4e810d1c6aa2686329d2f26c64f9d"
	I1227 20:54:40.028327  495198 cri.go:96] found id: ""
	I1227 20:54:40.028391  495198 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:54:40.082823  495198 out.go:203] 
	W1227 20:54:40.084365  495198 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:54:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:54:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 20:54:40.084465  495198 out.go:285] * 
	* 
	W1227 20:54:40.088610  495198 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 20:54:40.089965  495198 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-058924 --alsologtostderr -v=1 failed: exit status 80
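The stderr above shows the sequence minikube pause runs on the node before it gives up: check whether kubelet is active, enumerate kube-system containers through crictl, then call `sudo runc list -f json`, which fails here because /run/runc does not exist. Below is a minimal sketch of reproducing that probe by hand over minikube ssh; the profile name is taken from this report, and the /run/crun check is only an assumption about where a crun-backed CRI-O would keep container state, not something this log confirms.

PROFILE=default-k8s-diff-port-058924
# 1. Is kubelet still active on the node? (the pause path checks this first)
out/minikube-linux-arm64 -p "$PROFILE" ssh -- sudo systemctl is-active kubelet
# 2. List kube-system containers through the CRI, as the stderr above does
out/minikube-linux-arm64 -p "$PROFILE" ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
# 3. The step that fails in this run: runc has no state directory at /run/runc
out/minikube-linux-arm64 -p "$PROFILE" ssh -- sudo runc list -f json
# 4. Assumption: a crun-backed CRI-O keeps state under /run/crun instead; see which directory exists
out/minikube-linux-arm64 -p "$PROFILE" ssh -- ls -d /run/runc /run/crun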
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-058924
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-058924:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "14a8831f1ae279bdee9cb950b754d19cb55a9a96bb1c6a18f3fb90e8bfce9436",
	        "Created": "2025-12-27T20:52:26.32228828Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 492584,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:53:31.469313347Z",
	            "FinishedAt": "2025-12-27T20:53:30.672195745Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/14a8831f1ae279bdee9cb950b754d19cb55a9a96bb1c6a18f3fb90e8bfce9436/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/14a8831f1ae279bdee9cb950b754d19cb55a9a96bb1c6a18f3fb90e8bfce9436/hostname",
	        "HostsPath": "/var/lib/docker/containers/14a8831f1ae279bdee9cb950b754d19cb55a9a96bb1c6a18f3fb90e8bfce9436/hosts",
	        "LogPath": "/var/lib/docker/containers/14a8831f1ae279bdee9cb950b754d19cb55a9a96bb1c6a18f3fb90e8bfce9436/14a8831f1ae279bdee9cb950b754d19cb55a9a96bb1c6a18f3fb90e8bfce9436-json.log",
	        "Name": "/default-k8s-diff-port-058924",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-058924:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-058924",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "14a8831f1ae279bdee9cb950b754d19cb55a9a96bb1c6a18f3fb90e8bfce9436",
	                "LowerDir": "/var/lib/docker/overlay2/1705b32d6f7b3acd21037f84bc864fcd3368266ae22d9d1ff6c6114e626d27cd-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1705b32d6f7b3acd21037f84bc864fcd3368266ae22d9d1ff6c6114e626d27cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1705b32d6f7b3acd21037f84bc864fcd3368266ae22d9d1ff6c6114e626d27cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1705b32d6f7b3acd21037f84bc864fcd3368266ae22d9d1ff6c6114e626d27cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-058924",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-058924/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-058924",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-058924",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-058924",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "70000734e7bf52e061d1ee9fdded19dbf00c4137d4d68834643c5c391f0fcc64",
	            "SandboxKey": "/var/run/docker/netns/70000734e7bf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-058924": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:e8:d7:40:0a:cf",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4cf559b41345f8593676aae308d8407a6052ba110f51cbc56967a3187eac038b",
	                    "EndpointID": "99bbf9f671ba7ac53f1e6e70ad418d8a09f033712a396d3406973081bd2355fc",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-058924",
	                        "14a8831f1ae2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
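For readers tracing this post-mortem: the "Ports" block in the inspect output above is what the harness uses to reach the node from the host, with the container's 22/tcp published on 127.0.0.1:33423 and the API server's 8444/tcp on 127.0.0.1:33426. Below is a minimal sketch of that lookup, assuming only the Docker CLI and reusing the Go template visible in the cli_runner lines later in the log; the helper name is hypothetical and this is not minikube's own code.

    // hostSSHPort shells out to the Docker CLI and evaluates the same Go template
    // seen in the cli_runner log lines to find the host port mapped to the
    // container's 22/tcp endpoint.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func hostSSHPort(container string) (string, error) {
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", fmt.Errorf("inspect %s: %w", container, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := hostSSHPort("default-k8s-diff-port-058924")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        // With the inspect output captured above, this prints 33423.
        fmt.Println("ssh reachable at 127.0.0.1:" + port)
    }
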
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-058924 -n default-k8s-diff-port-058924
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-058924 -n default-k8s-diff-port-058924: exit status 2 (373.09214ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-058924 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-058924 logs -n 25: (1.241394415s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p cert-expiration-629954 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-629954       │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │ 27 Dec 25 20:45 UTC │
	│ start   │ -p cert-expiration-629954 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-629954       │ jenkins │ v1.37.0 │ 27 Dec 25 20:48 UTC │ 27 Dec 25 20:48 UTC │
	│ delete  │ -p cert-expiration-629954                                                                                                                                                                                                                     │ cert-expiration-629954       │ jenkins │ v1.37.0 │ 27 Dec 25 20:48 UTC │ 27 Dec 25 20:48 UTC │
	│ start   │ -p force-systemd-flag-604544 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-604544    │ jenkins │ v1.37.0 │ 27 Dec 25 20:48 UTC │                     │
	│ delete  │ -p force-systemd-env-859716                                                                                                                                                                                                                   │ force-systemd-env-859716     │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ start   │ -p cert-options-765175 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-765175          │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ ssh     │ cert-options-765175 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-765175          │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ ssh     │ -p cert-options-765175 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-765175          │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ delete  │ -p cert-options-765175                                                                                                                                                                                                                        │ cert-options-765175          │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ start   │ -p old-k8s-version-855707 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-855707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:50 UTC │                     │
	│ stop    │ -p old-k8s-version-855707 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:50 UTC │ 27 Dec 25 20:51 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-855707 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:51 UTC │ 27 Dec 25 20:51 UTC │
	│ start   │ -p old-k8s-version-855707 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:51 UTC │ 27 Dec 25 20:52 UTC │
	│ image   │ old-k8s-version-855707 image list --format=json                                                                                                                                                                                               │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ pause   │ -p old-k8s-version-855707 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │                     │
	│ delete  │ -p old-k8s-version-855707                                                                                                                                                                                                                     │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ delete  │ -p old-k8s-version-855707                                                                                                                                                                                                                     │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ start   │ -p default-k8s-diff-port-058924 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:53 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-058924 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-058924 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-058924 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
	│ start   │ -p default-k8s-diff-port-058924 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:54 UTC │
	│ image   │ default-k8s-diff-port-058924 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
	│ pause   │ -p default-k8s-diff-port-058924 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:53:31
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:53:31.185838  492447 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:53:31.186024  492447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:53:31.186055  492447 out.go:374] Setting ErrFile to fd 2...
	I1227 20:53:31.186078  492447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:53:31.186442  492447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:53:31.186920  492447 out.go:368] Setting JSON to false
	I1227 20:53:31.187822  492447 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9364,"bootTime":1766859448,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:53:31.187943  492447 start.go:143] virtualization:  
	I1227 20:53:31.190883  492447 out.go:179] * [default-k8s-diff-port-058924] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:53:31.193069  492447 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:53:31.193175  492447 notify.go:221] Checking for updates...
	I1227 20:53:31.198706  492447 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:53:31.201472  492447 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:53:31.204289  492447 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:53:31.207278  492447 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:53:31.210291  492447 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:53:31.213776  492447 config.go:182] Loaded profile config "default-k8s-diff-port-058924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:53:31.214342  492447 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:53:31.250996  492447 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:53:31.251102  492447 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:53:31.311097  492447 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:53:31.302138053 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:53:31.311207  492447 docker.go:319] overlay module found
	I1227 20:53:31.314411  492447 out.go:179] * Using the docker driver based on existing profile
	I1227 20:53:31.317256  492447 start.go:309] selected driver: docker
	I1227 20:53:31.317276  492447 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-058924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-058924 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:53:31.317395  492447 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:53:31.318103  492447 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:53:31.378936  492447 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:53:31.369439673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:53:31.379272  492447 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:53:31.379307  492447 cni.go:84] Creating CNI manager for ""
	I1227 20:53:31.379363  492447 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:53:31.379403  492447 start.go:353] cluster config:
	{Name:default-k8s-diff-port-058924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-058924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:53:31.384358  492447 out.go:179] * Starting "default-k8s-diff-port-058924" primary control-plane node in "default-k8s-diff-port-058924" cluster
	I1227 20:53:31.387085  492447 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:53:31.389893  492447 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:53:31.392527  492447 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:53:31.392569  492447 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:53:31.392583  492447 cache.go:65] Caching tarball of preloaded images
	I1227 20:53:31.392618  492447 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:53:31.392667  492447 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:53:31.392677  492447 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:53:31.392789  492447 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/config.json ...
	I1227 20:53:31.411530  492447 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:53:31.411555  492447 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:53:31.411570  492447 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:53:31.411598  492447 start.go:360] acquireMachinesLock for default-k8s-diff-port-058924: {Name:mk1f359d7e6bf82a20b5c0ba5278536cffac40ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:53:31.411659  492447 start.go:364] duration metric: took 36.816µs to acquireMachinesLock for "default-k8s-diff-port-058924"
	I1227 20:53:31.411683  492447 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:53:31.411693  492447 fix.go:54] fixHost starting: 
	I1227 20:53:31.411960  492447 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-058924 --format={{.State.Status}}
	I1227 20:53:31.433060  492447 fix.go:112] recreateIfNeeded on default-k8s-diff-port-058924: state=Stopped err=<nil>
	W1227 20:53:31.433090  492447 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:53:31.436212  492447 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-058924" ...
	I1227 20:53:31.436281  492447 cli_runner.go:164] Run: docker start default-k8s-diff-port-058924
	I1227 20:53:31.700299  492447 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-058924 --format={{.State.Status}}
	I1227 20:53:31.722974  492447 kic.go:430] container "default-k8s-diff-port-058924" state is running.
	I1227 20:53:31.723610  492447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-058924
	I1227 20:53:31.744165  492447 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/config.json ...
	I1227 20:53:31.744386  492447 machine.go:94] provisionDockerMachine start ...
	I1227 20:53:31.744451  492447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:53:31.767933  492447 main.go:144] libmachine: Using SSH client type: native
	I1227 20:53:31.768262  492447 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1227 20:53:31.768271  492447 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:53:31.768864  492447 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46624->127.0.0.1:33423: read: connection reset by peer
	I1227 20:53:34.908806  492447 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-058924
	
	I1227 20:53:34.908830  492447 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-058924"
	I1227 20:53:34.908894  492447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:53:34.926356  492447 main.go:144] libmachine: Using SSH client type: native
	I1227 20:53:34.926679  492447 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1227 20:53:34.926696  492447 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-058924 && echo "default-k8s-diff-port-058924" | sudo tee /etc/hostname
	I1227 20:53:35.075133  492447 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-058924
	
	I1227 20:53:35.075226  492447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:53:35.093219  492447 main.go:144] libmachine: Using SSH client type: native
	I1227 20:53:35.093566  492447 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1227 20:53:35.093586  492447 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-058924' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-058924/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-058924' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:53:35.239741  492447 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:53:35.239812  492447 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:53:35.239849  492447 ubuntu.go:190] setting up certificates
	I1227 20:53:35.239872  492447 provision.go:84] configureAuth start
	I1227 20:53:35.239947  492447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-058924
	I1227 20:53:35.270500  492447 provision.go:143] copyHostCerts
	I1227 20:53:35.270576  492447 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:53:35.270598  492447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:53:35.270679  492447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:53:35.270773  492447 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:53:35.270778  492447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:53:35.270804  492447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:53:35.270855  492447 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:53:35.270860  492447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:53:35.270882  492447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:53:35.270930  492447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-058924 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-058924 localhost minikube]
	I1227 20:53:35.581301  492447 provision.go:177] copyRemoteCerts
	I1227 20:53:35.581382  492447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:53:35.581423  492447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:53:35.601388  492447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa Username:docker}
	I1227 20:53:35.702982  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1227 20:53:35.721134  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:53:35.739181  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:53:35.756049  492447 provision.go:87] duration metric: took 516.142273ms to configureAuth
	I1227 20:53:35.756080  492447 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:53:35.756276  492447 config.go:182] Loaded profile config "default-k8s-diff-port-058924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:53:35.756380  492447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:53:35.774363  492447 main.go:144] libmachine: Using SSH client type: native
	I1227 20:53:35.774687  492447 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1227 20:53:35.774708  492447 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:53:36.141795  492447 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:53:36.141818  492447 machine.go:97] duration metric: took 4.397423053s to provisionDockerMachine
	I1227 20:53:36.141830  492447 start.go:293] postStartSetup for "default-k8s-diff-port-058924" (driver="docker")
	I1227 20:53:36.141841  492447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:53:36.141905  492447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:53:36.141943  492447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:53:36.163164  492447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa Username:docker}
	I1227 20:53:36.261051  492447 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:53:36.264376  492447 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:53:36.264406  492447 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:53:36.264435  492447 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:53:36.264495  492447 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:53:36.264628  492447 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:53:36.264741  492447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:53:36.272125  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:53:36.289248  492447 start.go:296] duration metric: took 147.401066ms for postStartSetup
	I1227 20:53:36.289325  492447 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:53:36.289372  492447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:53:36.306092  492447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa Username:docker}
	I1227 20:53:36.402304  492447 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:53:36.406910  492447 fix.go:56] duration metric: took 4.99521019s for fixHost
	I1227 20:53:36.406938  492447 start.go:83] releasing machines lock for "default-k8s-diff-port-058924", held for 4.995265983s
	I1227 20:53:36.407010  492447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-058924
	I1227 20:53:36.424018  492447 ssh_runner.go:195] Run: cat /version.json
	I1227 20:53:36.424069  492447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:53:36.424337  492447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:53:36.424387  492447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:53:36.446667  492447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa Username:docker}
	I1227 20:53:36.446699  492447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa Username:docker}
	I1227 20:53:36.629286  492447 ssh_runner.go:195] Run: systemctl --version
	I1227 20:53:36.635571  492447 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:53:36.670656  492447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:53:36.674910  492447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:53:36.674980  492447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:53:36.683005  492447 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:53:36.683031  492447 start.go:496] detecting cgroup driver to use...
	I1227 20:53:36.683063  492447 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:53:36.683116  492447 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:53:36.698014  492447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:53:36.711706  492447 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:53:36.711819  492447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:53:36.730519  492447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:53:36.744526  492447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:53:36.861739  492447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:53:36.975727  492447 docker.go:234] disabling docker service ...
	I1227 20:53:36.975860  492447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:53:36.989905  492447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:53:37.002617  492447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:53:37.132847  492447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:53:37.244838  492447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:53:37.259473  492447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:53:37.273282  492447 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:53:37.273343  492447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:53:37.282848  492447 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:53:37.282910  492447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:53:37.291299  492447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:53:37.299854  492447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:53:37.308145  492447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:53:37.316225  492447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:53:37.324466  492447 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:53:37.332338  492447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:53:37.340733  492447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:53:37.348394  492447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:53:37.355687  492447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:53:37.474678  492447 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:53:37.657184  492447 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:53:37.657255  492447 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:53:37.661099  492447 start.go:574] Will wait 60s for crictl version
	I1227 20:53:37.661159  492447 ssh_runner.go:195] Run: which crictl
	I1227 20:53:37.664554  492447 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:53:37.688108  492447 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:53:37.688198  492447 ssh_runner.go:195] Run: crio --version
	I1227 20:53:37.715843  492447 ssh_runner.go:195] Run: crio --version
	I1227 20:53:37.746794  492447 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:53:37.749650  492447 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-058924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:53:37.765756  492447 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 20:53:37.769765  492447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:53:37.779922  492447 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-058924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-058924 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:53:37.780049  492447 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:53:37.780105  492447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:53:37.814232  492447 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:53:37.814253  492447 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:53:37.814305  492447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:53:37.838484  492447 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:53:37.838509  492447 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:53:37.838517  492447 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.35.0 crio true true} ...
	I1227 20:53:37.838628  492447 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-058924 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-058924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:53:37.838726  492447 ssh_runner.go:195] Run: crio config
	I1227 20:53:37.892957  492447 cni.go:84] Creating CNI manager for ""
	I1227 20:53:37.892977  492447 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:53:37.892991  492447 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:53:37.893021  492447 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-058924 NodeName:default-k8s-diff-port-058924 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:53:37.893154  492447 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-058924"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:53:37.893224  492447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:53:37.900897  492447 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:53:37.900971  492447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:53:37.908125  492447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1227 20:53:37.920217  492447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:53:37.931973  492447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I1227 20:53:37.944636  492447 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:53:37.948118  492447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:53:37.957313  492447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:53:38.076918  492447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:53:38.096482  492447 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924 for IP: 192.168.76.2
	I1227 20:53:38.096507  492447 certs.go:195] generating shared ca certs ...
	I1227 20:53:38.096524  492447 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:53:38.096681  492447 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:53:38.096721  492447 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:53:38.096728  492447 certs.go:257] generating profile certs ...
	I1227 20:53:38.096813  492447 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/client.key
	I1227 20:53:38.096884  492447 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/apiserver.key.eada78d3
	I1227 20:53:38.096924  492447 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/proxy-client.key
	I1227 20:53:38.097041  492447 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:53:38.097071  492447 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:53:38.097078  492447 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:53:38.097108  492447 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:53:38.097133  492447 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:53:38.097159  492447 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:53:38.097206  492447 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:53:38.097918  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:53:38.117874  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:53:38.134494  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:53:38.150587  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:53:38.166560  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:53:38.184261  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:53:38.201248  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:53:38.246292  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:53:38.290525  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:53:38.321897  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:53:38.340586  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:53:38.358940  492447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:53:38.371508  492447 ssh_runner.go:195] Run: openssl version
	I1227 20:53:38.377912  492447 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:53:38.385061  492447 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:53:38.392199  492447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:53:38.395687  492447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:53:38.395749  492447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:53:38.438132  492447 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:53:38.445218  492447 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:53:38.452915  492447 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:53:38.459932  492447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:53:38.463380  492447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:53:38.463435  492447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:53:38.503771  492447 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:53:38.510998  492447 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:53:38.518171  492447 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:53:38.525110  492447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:53:38.528751  492447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:53:38.528814  492447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:53:38.570048  492447 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
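	(Each of the three certificate blocks above follows the same pattern: the PEM is copied under /usr/share/ca-certificates, symlinked into /etc/ssl/certs, and its OpenSSL subject hash is computed so the matching <hash>.0 link (3ec20f2e.0, b5213941.0 and 51391683.0 here) can be verified; that hash link is what OpenSSL uses to locate a trusted CA. A minimal sketch of the hashing step, reusing the minikubeCA.pem path from this run:

	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	)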
	I1227 20:53:38.577153  492447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:53:38.580808  492447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:53:38.621369  492447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:53:38.666571  492447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:53:38.708300  492447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:53:38.749106  492447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:53:38.800806  492447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
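	(The -checkend 86400 runs above succeed only if each control-plane certificate stays valid for at least another 24 hours (86400 seconds). The same check can be repeated by hand inside the node, for example against the apiserver certificate copied earlier in this log:

	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo "valid >24h" || echo "expires within 24h"
	)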
	I1227 20:53:38.896900  492447 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-058924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-058924 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:53:38.897044  492447 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:53:38.897143  492447 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:53:38.939121  492447 cri.go:96] found id: "59e2430d76d4f0fe1a76d346425874373d6f2997b74d9256e8d9c44383dd8e9c"
	I1227 20:53:38.939190  492447 cri.go:96] found id: "0284e6610331862ca822c0a398c4ebd7e2be94ee4e5d1cb1536b5202a533ba8b"
	I1227 20:53:38.939218  492447 cri.go:96] found id: ""
	I1227 20:53:38.939298  492447 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:53:38.960834  492447 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:53:38Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:53:38.960960  492447 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:53:38.987860  492447 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:53:38.987919  492447 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:53:38.988011  492447 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:53:39.046199  492447 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:53:39.046686  492447 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-058924" does not appear in /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:53:39.046855  492447 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-272475/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-058924" cluster setting kubeconfig missing "default-k8s-diff-port-058924" context setting]
	I1227 20:53:39.047207  492447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:53:39.048410  492447 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:53:39.059456  492447 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1227 20:53:39.059536  492447 kubeadm.go:602] duration metric: took 71.587442ms to restartPrimaryControlPlane
	I1227 20:53:39.059560  492447 kubeadm.go:403] duration metric: took 162.682144ms to StartCluster
	I1227 20:53:39.059606  492447 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:53:39.059702  492447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:53:39.060398  492447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:53:39.060888  492447 config.go:182] Loaded profile config "default-k8s-diff-port-058924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:53:39.060960  492447 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:53:39.061020  492447 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:53:39.061107  492447 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-058924"
	I1227 20:53:39.061134  492447 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-058924"
	W1227 20:53:39.061154  492447 addons.go:248] addon storage-provisioner should already be in state true
	I1227 20:53:39.061208  492447 host.go:66] Checking if "default-k8s-diff-port-058924" exists ...
	I1227 20:53:39.061890  492447 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-058924 --format={{.State.Status}}
	I1227 20:53:39.062950  492447 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-058924"
	I1227 20:53:39.062967  492447 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-058924"
	W1227 20:53:39.062973  492447 addons.go:248] addon dashboard should already be in state true
	I1227 20:53:39.063002  492447 host.go:66] Checking if "default-k8s-diff-port-058924" exists ...
	I1227 20:53:39.063475  492447 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-058924 --format={{.State.Status}}
	I1227 20:53:39.063668  492447 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-058924"
	I1227 20:53:39.063690  492447 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-058924"
	I1227 20:53:39.063954  492447 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-058924 --format={{.State.Status}}
	I1227 20:53:39.074038  492447 out.go:179] * Verifying Kubernetes components...
	I1227 20:53:39.081664  492447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:53:39.116001  492447 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 20:53:39.116127  492447 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:53:39.118250  492447 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-058924"
	W1227 20:53:39.118268  492447 addons.go:248] addon default-storageclass should already be in state true
	I1227 20:53:39.118290  492447 host.go:66] Checking if "default-k8s-diff-port-058924" exists ...
	I1227 20:53:39.118723  492447 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-058924 --format={{.State.Status}}
	I1227 20:53:39.123314  492447 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:53:39.123333  492447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:53:39.123398  492447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:53:39.126129  492447 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 20:53:39.130466  492447 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 20:53:39.130490  492447 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 20:53:39.130587  492447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:53:39.152100  492447 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:53:39.152122  492447 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:53:39.152178  492447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:53:39.200443  492447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa Username:docker}
	I1227 20:53:39.210355  492447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa Username:docker}
	I1227 20:53:39.212491  492447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa Username:docker}
	I1227 20:53:39.410915  492447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:53:39.436321  492447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:53:39.459850  492447 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-058924" to be "Ready" ...
	I1227 20:53:39.478955  492447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:53:39.503626  492447 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 20:53:39.503709  492447 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 20:53:39.605257  492447 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 20:53:39.605331  492447 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 20:53:39.649962  492447 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 20:53:39.650033  492447 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 20:53:39.662698  492447 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 20:53:39.662769  492447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 20:53:39.675379  492447 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 20:53:39.675452  492447 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 20:53:39.694284  492447 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 20:53:39.694373  492447 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 20:53:39.728016  492447 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 20:53:39.728090  492447 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 20:53:39.759822  492447 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 20:53:39.759901  492447 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 20:53:39.795480  492447 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:53:39.795556  492447 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 20:53:39.810318  492447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:53:42.087873  492447 node_ready.go:49] node "default-k8s-diff-port-058924" is "Ready"
	I1227 20:53:42.087979  492447 node_ready.go:38] duration metric: took 2.628101767s for node "default-k8s-diff-port-058924" to be "Ready" ...
	I1227 20:53:42.088011  492447 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:53:42.088125  492447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:53:43.770195  492447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.333796393s)
	I1227 20:53:43.770249  492447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.291217852s)
	I1227 20:53:43.770524  492447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.960107831s)
	I1227 20:53:43.770730  492447 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.682570097s)
	I1227 20:53:43.770747  492447 api_server.go:72] duration metric: took 4.709750607s to wait for apiserver process to appear ...
	I1227 20:53:43.770754  492447 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:53:43.770781  492447 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1227 20:53:43.773778  492447 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-058924 addons enable metrics-server
	
	I1227 20:53:43.780350  492447 api_server.go:325] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1227 20:53:43.782612  492447 api_server.go:141] control plane version: v1.35.0
	I1227 20:53:43.782636  492447 api_server.go:131] duration metric: took 11.875667ms to wait for apiserver health ...
	I1227 20:53:43.782645  492447 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:53:43.786840  492447 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 20:53:43.787472  492447 system_pods.go:59] 8 kube-system pods found
	I1227 20:53:43.787557  492447 system_pods.go:61] "coredns-7d764666f9-7wf76" [14f9ecf5-c5b1-4458-bce4-18c5f12a447a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:53:43.787584  492447 system_pods.go:61] "etcd-default-k8s-diff-port-058924" [b6248c66-2ad3-43a3-ba5a-b5f3bd9219a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:53:43.787619  492447 system_pods.go:61] "kindnet-8clbx" [a53eca44-5c16-4e1c-b208-061922a489d6] Running
	I1227 20:53:43.787646  492447 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-058924" [2a305629-e147-40ce-9422-c1010a8bbbcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:53:43.787725  492447 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-058924" [b4da8504-1367-4584-8f91-0de40e6c3b81] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:53:43.787753  492447 system_pods.go:61] "kube-proxy-m2mtv" [b85165f6-d028-4fd5-92e8-e1b227aa2270] Running
	I1227 20:53:43.787794  492447 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-058924" [e1319201-ba51-43c7-b16d-efd9120f0e5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:53:43.787827  492447 system_pods.go:61] "storage-provisioner" [e205de3f-3506-425b-a039-4dfa897cf8f9] Running
	I1227 20:53:43.787870  492447 system_pods.go:74] duration metric: took 5.203988ms to wait for pod list to return data ...
	I1227 20:53:43.787902  492447 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:53:43.790039  492447 addons.go:530] duration metric: took 4.729016875s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 20:53:43.790577  492447 default_sa.go:45] found service account: "default"
	I1227 20:53:43.790596  492447 default_sa.go:55] duration metric: took 2.675781ms for default service account to be created ...
	I1227 20:53:43.790604  492447 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:53:43.793142  492447 system_pods.go:86] 8 kube-system pods found
	I1227 20:53:43.793170  492447 system_pods.go:89] "coredns-7d764666f9-7wf76" [14f9ecf5-c5b1-4458-bce4-18c5f12a447a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:53:43.793190  492447 system_pods.go:89] "etcd-default-k8s-diff-port-058924" [b6248c66-2ad3-43a3-ba5a-b5f3bd9219a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:53:43.793198  492447 system_pods.go:89] "kindnet-8clbx" [a53eca44-5c16-4e1c-b208-061922a489d6] Running
	I1227 20:53:43.793207  492447 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-058924" [2a305629-e147-40ce-9422-c1010a8bbbcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:53:43.793216  492447 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-058924" [b4da8504-1367-4584-8f91-0de40e6c3b81] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:53:43.793221  492447 system_pods.go:89] "kube-proxy-m2mtv" [b85165f6-d028-4fd5-92e8-e1b227aa2270] Running
	I1227 20:53:43.793229  492447 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-058924" [e1319201-ba51-43c7-b16d-efd9120f0e5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:53:43.793233  492447 system_pods.go:89] "storage-provisioner" [e205de3f-3506-425b-a039-4dfa897cf8f9] Running
	I1227 20:53:43.793241  492447 system_pods.go:126] duration metric: took 2.631613ms to wait for k8s-apps to be running ...
	I1227 20:53:43.793248  492447 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:53:43.793299  492447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:53:43.809748  492447 system_svc.go:56] duration metric: took 16.491364ms WaitForService to wait for kubelet
	I1227 20:53:43.809775  492447 kubeadm.go:587] duration metric: took 4.748776085s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:53:43.809793  492447 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:53:43.814705  492447 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:53:43.814753  492447 node_conditions.go:123] node cpu capacity is 2
	I1227 20:53:43.814768  492447 node_conditions.go:105] duration metric: took 4.968942ms to run NodePressure ...
	I1227 20:53:43.814785  492447 start.go:242] waiting for startup goroutines ...
	I1227 20:53:43.814793  492447 start.go:247] waiting for cluster config update ...
	I1227 20:53:43.814804  492447 start.go:256] writing updated cluster config ...
	I1227 20:53:43.815079  492447 ssh_runner.go:195] Run: rm -f paused
	I1227 20:53:43.818622  492447 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:53:43.887173  492447 pod_ready.go:83] waiting for pod "coredns-7d764666f9-7wf76" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 20:53:45.896737  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:53:48.392906  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:53:50.393717  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:53:52.893301  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:53:55.392767  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:53:57.393378  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:53:59.892227  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:54:01.893372  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:54:04.392090  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:54:06.392663  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:54:08.893521  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:54:11.393673  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:54:13.892233  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:54:15.893035  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:54:18.393163  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:54:20.892619  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:54:22.893235  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	I1227 20:54:24.892347  492447 pod_ready.go:94] pod "coredns-7d764666f9-7wf76" is "Ready"
	I1227 20:54:24.892376  492447 pod_ready.go:86] duration metric: took 41.0051772s for pod "coredns-7d764666f9-7wf76" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:54:24.894962  492447 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-058924" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:54:24.898962  492447 pod_ready.go:94] pod "etcd-default-k8s-diff-port-058924" is "Ready"
	I1227 20:54:24.898992  492447 pod_ready.go:86] duration metric: took 4.001446ms for pod "etcd-default-k8s-diff-port-058924" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:54:24.901164  492447 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-058924" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:54:24.905254  492447 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-058924" is "Ready"
	I1227 20:54:24.905282  492447 pod_ready.go:86] duration metric: took 4.092693ms for pod "kube-apiserver-default-k8s-diff-port-058924" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:54:24.907345  492447 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-058924" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:54:25.090468  492447 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-058924" is "Ready"
	I1227 20:54:25.090498  492447 pod_ready.go:86] duration metric: took 183.127736ms for pod "kube-controller-manager-default-k8s-diff-port-058924" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:54:25.290434  492447 pod_ready.go:83] waiting for pod "kube-proxy-m2mtv" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:54:25.691193  492447 pod_ready.go:94] pod "kube-proxy-m2mtv" is "Ready"
	I1227 20:54:25.691222  492447 pod_ready.go:86] duration metric: took 400.758485ms for pod "kube-proxy-m2mtv" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:54:25.890102  492447 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-058924" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:54:26.295125  492447 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-058924" is "Ready"
	I1227 20:54:26.295164  492447 pod_ready.go:86] duration metric: took 405.037792ms for pod "kube-scheduler-default-k8s-diff-port-058924" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:54:26.295177  492447 pod_ready.go:40] duration metric: took 42.476472247s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:54:26.349209  492447 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 20:54:26.352346  492447 out.go:203] 
	W1227 20:54:26.355413  492447 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 20:54:26.358320  492447 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 20:54:26.361302  492447 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-058924" cluster and "default" namespace by default
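	(With the profile started, the kubeconfig context name matches the profile name, so the cluster exercised in this test can also be inspected directly; given the kubectl version-skew warning above, minikube's bundled kubectl is the safer choice:

	    kubectl --context default-k8s-diff-port-058924 get pods -A
	    minikube -p default-k8s-diff-port-058924 kubectl -- get pods -A
	)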
	
	
	==> CRI-O <==
	Dec 27 20:54:23 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:23.090821615Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:54:23 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:23.093914984Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:54:23 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:23.094067382Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:54:23 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:23.094144467Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:54:23 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:23.097864871Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:54:23 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:23.098015086Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:54:23 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:23.098090506Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:54:23 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:23.101553317Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:54:23 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:23.10158507Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:54:23 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:23.101608708Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:54:23 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:23.105308863Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:54:23 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:23.105344858Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:54:24 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:24.330803372Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f85855b2-c068-4509-a94c-f280377766be name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:54:24 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:24.331690469Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=be005dd2-813b-4df0-bf16-4cb0d10d4acb name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:54:24 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:24.332587092Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf/dashboard-metrics-scraper" id=a5e7a954-6ce5-4e79-b979-761c93738116 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:54:24 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:24.332701624Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:54:24 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:24.339759903Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:54:24 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:24.340276615Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:54:24 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:24.359370707Z" level=info msg="Created container 37c4318d86e79f02bbe58773a1a5fb7cfc6d4907577ccae135cbebe491459fd3: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf/dashboard-metrics-scraper" id=a5e7a954-6ce5-4e79-b979-761c93738116 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:54:24 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:24.362202849Z" level=info msg="Starting container: 37c4318d86e79f02bbe58773a1a5fb7cfc6d4907577ccae135cbebe491459fd3" id=a7eda10e-b72e-48f8-9a50-898de3788535 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:54:24 default-k8s-diff-port-058924 conmon[1708]: conmon 37c4318d86e79f02bbe5 <ninfo>: container 1711 exited with status 1
	Dec 27 20:54:24 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:24.367978752Z" level=info msg="Started container" PID=1711 containerID=37c4318d86e79f02bbe58773a1a5fb7cfc6d4907577ccae135cbebe491459fd3 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf/dashboard-metrics-scraper id=a7eda10e-b72e-48f8-9a50-898de3788535 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a7946247412d2da16340f62a2c16da00ee3a6dacae7f9ec990290b1d10aa9a7
	Dec 27 20:54:24 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:24.541610074Z" level=info msg="Removing container: 1f7f380f471fec7ea5567d77937a95a3f95bb124df7d8f75077be090e269c953" id=eb3d966c-996c-48d6-a393-7b314ce45ac3 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:54:24 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:24.548681932Z" level=info msg="Error loading conmon cgroup of container 1f7f380f471fec7ea5567d77937a95a3f95bb124df7d8f75077be090e269c953: cgroup deleted" id=eb3d966c-996c-48d6-a393-7b314ce45ac3 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:54:24 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:24.55419084Z" level=info msg="Removed container 1f7f380f471fec7ea5567d77937a95a3f95bb124df7d8f75077be090e269c953: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf/dashboard-metrics-scraper" id=eb3d966c-996c-48d6-a393-7b314ce45ac3 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	37c4318d86e79       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago       Exited              dashboard-metrics-scraper   3                   3a7946247412d       dashboard-metrics-scraper-867fb5f87b-ggllf             kubernetes-dashboard
	947ffc68c455a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   ffd616ce150da       storage-provisioner                                    kube-system
	cf047da0f34ba       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   46 seconds ago       Running             kubernetes-dashboard        0                   a9674efb7562d       kubernetes-dashboard-b84665fb8-l99xd                   kubernetes-dashboard
	ec52181844fe0       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           58 seconds ago       Running             coredns                     1                   940d4cd2f1a62       coredns-7d764666f9-7wf76                               kube-system
	3a0f52a8ae08b       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   58ce9167da7f8       busybox                                                default
	38227fcb6aa73       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   ffd616ce150da       storage-provisioner                                    kube-system
	4a06bde369d1e       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           58 seconds ago       Running             kindnet-cni                 1                   d49b2cc1986d0       kindnet-8clbx                                          kube-system
	84c8ca768685e       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           58 seconds ago       Running             kube-proxy                  1                   50d22c81e4c85       kube-proxy-m2mtv                                       kube-system
	3f3183a409a49       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           About a minute ago   Running             etcd                        1                   0e0ce44035982       etcd-default-k8s-diff-port-058924                      kube-system
	8fd1fa375b3be       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           About a minute ago   Running             kube-scheduler              1                   2918271f1f51b       kube-scheduler-default-k8s-diff-port-058924            kube-system
	59e2430d76d4f       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           About a minute ago   Running             kube-controller-manager     1                   29c08664457fb       kube-controller-manager-default-k8s-diff-port-058924   kube-system
	0284e66103318       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           About a minute ago   Running             kube-apiserver              1                   d83d7557592a6       kube-apiserver-default-k8s-diff-port-058924            kube-system
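	(The table above is the node's CRI-level view of all containers, running and exited; the Exited dashboard-metrics-scraper attempt matches the container removal seen in the CRI-O log just before it. The listing can be reproduced from the host with something like:

	    minikube -p default-k8s-diff-port-058924 ssh -- sudo crictl ps -a
	)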
	
	
	==> coredns [ec52181844fe079b145958d700f1e1fe5dcf49cb2b9cc8579e21772009c71688] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:58807 - 33705 "HINFO IN 7918744130668000058.3578085914535680865. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013731508s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-058924
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-058924
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=default-k8s-diff-port-058924
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_52_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:52:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-058924
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:54:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:54:33 +0000   Sat, 27 Dec 2025 20:52:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:54:33 +0000   Sat, 27 Dec 2025 20:52:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:54:33 +0000   Sat, 27 Dec 2025 20:52:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:54:33 +0000   Sat, 27 Dec 2025 20:53:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-058924
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                c6cef3de-c29f-4e64-acd9-52f541b38c56
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-7d764666f9-7wf76                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     109s
	  kube-system                 etcd-default-k8s-diff-port-058924                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         115s
	  kube-system                 kindnet-8clbx                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-default-k8s-diff-port-058924             250m (12%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-058924    200m (10%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-m2mtv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-default-k8s-diff-port-058924             100m (5%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-ggllf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-l99xd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  111s  node-controller  Node default-k8s-diff-port-058924 event: Registered Node default-k8s-diff-port-058924 in Controller
	  Normal  RegisteredNode  56s   node-controller  Node default-k8s-diff-port-058924 event: Registered Node default-k8s-diff-port-058924 in Controller
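	(The node description above is the standard kubectl view of the control-plane node; outside the test harness it corresponds to:

	    kubectl --context default-k8s-diff-port-058924 describe node default-k8s-diff-port-058924

	The request/limit percentages are computed against the node's Allocatable resources, 2 CPUs and roughly 8Gi of memory here, which is why 850m of CPU requests shows as 42%.)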
	
	
	==> dmesg <==
	[ +36.244108] systemd-journald[225]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 20:22] overlayfs: idmapped layers are currently not supported
	[Dec27 20:23] overlayfs: idmapped layers are currently not supported
	[Dec27 20:24] overlayfs: idmapped layers are currently not supported
	[Dec27 20:25] overlayfs: idmapped layers are currently not supported
	[ +35.447549] overlayfs: idmapped layers are currently not supported
	[Dec27 20:26] overlayfs: idmapped layers are currently not supported
	[Dec27 20:27] overlayfs: idmapped layers are currently not supported
	[  +6.770645] overlayfs: idmapped layers are currently not supported
	[Dec27 20:28] overlayfs: idmapped layers are currently not supported
	[ +25.872751] overlayfs: idmapped layers are currently not supported
	[Dec27 20:29] overlayfs: idmapped layers are currently not supported
	[ +32.997137] overlayfs: idmapped layers are currently not supported
	[Dec27 20:31] overlayfs: idmapped layers are currently not supported
	[Dec27 20:33] overlayfs: idmapped layers are currently not supported
	[ +33.772475] overlayfs: idmapped layers are currently not supported
	[Dec27 20:39] overlayfs: idmapped layers are currently not supported
	[Dec27 20:40] overlayfs: idmapped layers are currently not supported
	[Dec27 20:44] overlayfs: idmapped layers are currently not supported
	[Dec27 20:45] overlayfs: idmapped layers are currently not supported
	[Dec27 20:49] overlayfs: idmapped layers are currently not supported
	[Dec27 20:50] overlayfs: idmapped layers are currently not supported
	[Dec27 20:51] overlayfs: idmapped layers are currently not supported
	[Dec27 20:52] overlayfs: idmapped layers are currently not supported
	[Dec27 20:53] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3f3183a409a49620f674f6cfc989ab37a01f933a3e0df3233dd981b0090f1459] <==
	{"level":"info","ts":"2025-12-27T20:53:39.602411Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T20:53:39.602430Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T20:53:39.606364Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-27T20:53:39.606525Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T20:53:39.606625Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T20:53:39.614045Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T20:53:39.614157Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T20:53:39.895011Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T20:53:39.900853Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:53:39.900942Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T20:53:39.900962Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:53:39.900977Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T20:53:39.909505Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T20:53:39.909596Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:53:39.909640Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T20:53:39.910222Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T20:53:39.912395Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-diff-port-058924 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:53:39.912609Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:53:39.913777Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:53:39.921341Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:53:39.921678Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:53:39.921773Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:53:39.921961Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:53:39.926312Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:53:39.938028Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 20:54:41 up  2:37,  0 user,  load average: 0.85, 1.43, 1.73
	Linux default-k8s-diff-port-058924 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4a06bde369d1ed3ed4adcb37795d14e35cc6d700346f1d8735f87fcc17f5acbd] <==
	I1227 20:53:42.845156       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:53:42.845513       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 20:53:42.845705       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:53:42.845718       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:53:42.845726       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:53:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:53:43.130011       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:53:43.130039       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:53:43.130048       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:53:43.130430       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 20:54:13.080555       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1227 20:54:13.131112       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1227 20:54:13.131113       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 20:54:13.131207       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1227 20:54:14.730324       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:54:14.730358       1 metrics.go:72] Registering metrics
	I1227 20:54:14.730440       1 controller.go:711] "Syncing nftables rules"
	I1227 20:54:23.087017       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:54:23.087071       1 main.go:301] handling current node
	I1227 20:54:33.083166       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:54:33.083221       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0284e6610331862ca822c0a398c4ebd7e2be94ee4e5d1cb1536b5202a533ba8b] <==
	I1227 20:53:42.111734       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:42.155143       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 20:53:42.173679       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:53:42.179123       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 20:53:42.179158       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 20:53:42.181503       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 20:53:42.181701       1 cache.go:39] Caches are synced for autoregister controller
	I1227 20:53:42.183888       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 20:53:42.185956       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 20:53:42.194381       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 20:53:42.206851       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:42.206890       1 policy_source.go:248] refreshing policies
	I1227 20:53:42.212849       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 20:53:42.269125       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:53:42.338836       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:53:42.923996       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:53:43.434774       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 20:53:43.560718       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:53:43.620990       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:53:43.637993       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:53:43.727514       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.161.31"}
	I1227 20:53:43.760078       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.250.64"}
	I1227 20:53:45.456574       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:53:45.655647       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:53:45.794252       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [59e2430d76d4f0fe1a76d346425874373d6f2997b74d9256e8d9c44383dd8e9c] <==
	I1227 20:53:45.210603       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199217       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.195104       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199279       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199287       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199296       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199268       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199347       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199355       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199363       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199369       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199375       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199381       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199390       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.234052       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 20:53:45.234190       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="default-k8s-diff-port-058924"
	I1227 20:53:45.234290       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 20:53:45.199338       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199397       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199423       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199431       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.269671       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.284597       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.284819       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:53:45.284835       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [84c8ca768685ecc278641733fdb714032c8f936c2c0791b5cd5d1aa930606977] <==
	I1227 20:53:42.899423       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:53:43.159225       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:53:43.361576       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:43.361623       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 20:53:43.361736       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:53:43.570937       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:53:43.571089       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:53:43.589087       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:53:43.589595       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:53:43.589625       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:53:43.591372       1 config.go:200] "Starting service config controller"
	I1227 20:53:43.591393       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:53:43.597639       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:53:43.597719       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:53:43.597765       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:53:43.597793       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:53:43.598328       1 config.go:309] "Starting node config controller"
	I1227 20:53:43.598431       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:53:43.598463       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:53:43.693410       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:53:43.701236       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 20:53:43.701277       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8fd1fa375b3beeeb401d83d95cfb1937027fe068753413aea7fd26b9455cd5a0] <==
	I1227 20:53:39.906248       1 serving.go:386] Generated self-signed cert in-memory
	I1227 20:53:42.209339       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 20:53:42.209383       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:53:42.222948       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 20:53:42.222978       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1227 20:53:42.223016       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:53:42.223056       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 20:53:42.223076       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:53:42.223103       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 20:53:42.223133       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1227 20:53:42.223142       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:53:42.324015       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:42.324469       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:42.325367       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:53:58 default-k8s-diff-port-058924 kubelet[785]: I1227 20:53:58.466375     785 scope.go:122] "RemoveContainer" containerID="f90cdefc113bcc1729c3dd72de47cfa9650d0f81da1efbb86d4429b7e6b9a684"
	Dec 27 20:53:58 default-k8s-diff-port-058924 kubelet[785]: E1227 20:53:58.466529     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ggllf_kubernetes-dashboard(ef835eaf-9888-41cc-b525-faa9d4b2b6a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf" podUID="ef835eaf-9888-41cc-b525-faa9d4b2b6a6"
	Dec 27 20:54:01 default-k8s-diff-port-058924 kubelet[785]: E1227 20:54:01.328144     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf" containerName="dashboard-metrics-scraper"
	Dec 27 20:54:01 default-k8s-diff-port-058924 kubelet[785]: I1227 20:54:01.328190     785 scope.go:122] "RemoveContainer" containerID="f90cdefc113bcc1729c3dd72de47cfa9650d0f81da1efbb86d4429b7e6b9a684"
	Dec 27 20:54:01 default-k8s-diff-port-058924 kubelet[785]: I1227 20:54:01.479768     785 scope.go:122] "RemoveContainer" containerID="f90cdefc113bcc1729c3dd72de47cfa9650d0f81da1efbb86d4429b7e6b9a684"
	Dec 27 20:54:01 default-k8s-diff-port-058924 kubelet[785]: E1227 20:54:01.480147     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf" containerName="dashboard-metrics-scraper"
	Dec 27 20:54:01 default-k8s-diff-port-058924 kubelet[785]: I1227 20:54:01.480187     785 scope.go:122] "RemoveContainer" containerID="1f7f380f471fec7ea5567d77937a95a3f95bb124df7d8f75077be090e269c953"
	Dec 27 20:54:01 default-k8s-diff-port-058924 kubelet[785]: E1227 20:54:01.480367     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ggllf_kubernetes-dashboard(ef835eaf-9888-41cc-b525-faa9d4b2b6a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf" podUID="ef835eaf-9888-41cc-b525-faa9d4b2b6a6"
	Dec 27 20:54:08 default-k8s-diff-port-058924 kubelet[785]: E1227 20:54:08.466212     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf" containerName="dashboard-metrics-scraper"
	Dec 27 20:54:08 default-k8s-diff-port-058924 kubelet[785]: I1227 20:54:08.466719     785 scope.go:122] "RemoveContainer" containerID="1f7f380f471fec7ea5567d77937a95a3f95bb124df7d8f75077be090e269c953"
	Dec 27 20:54:08 default-k8s-diff-port-058924 kubelet[785]: E1227 20:54:08.466957     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ggllf_kubernetes-dashboard(ef835eaf-9888-41cc-b525-faa9d4b2b6a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf" podUID="ef835eaf-9888-41cc-b525-faa9d4b2b6a6"
	Dec 27 20:54:13 default-k8s-diff-port-058924 kubelet[785]: I1227 20:54:13.512081     785 scope.go:122] "RemoveContainer" containerID="38227fcb6aa735a8d71a2d3a9cbddcc31e2b4e9f35fd1f6705090de387c7487f"
	Dec 27 20:54:24 default-k8s-diff-port-058924 kubelet[785]: E1227 20:54:24.330185     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf" containerName="dashboard-metrics-scraper"
	Dec 27 20:54:24 default-k8s-diff-port-058924 kubelet[785]: I1227 20:54:24.330224     785 scope.go:122] "RemoveContainer" containerID="1f7f380f471fec7ea5567d77937a95a3f95bb124df7d8f75077be090e269c953"
	Dec 27 20:54:24 default-k8s-diff-port-058924 kubelet[785]: E1227 20:54:24.414095     785 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-7wf76" containerName="coredns"
	Dec 27 20:54:24 default-k8s-diff-port-058924 kubelet[785]: I1227 20:54:24.540171     785 scope.go:122] "RemoveContainer" containerID="1f7f380f471fec7ea5567d77937a95a3f95bb124df7d8f75077be090e269c953"
	Dec 27 20:54:25 default-k8s-diff-port-058924 kubelet[785]: E1227 20:54:25.544604     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf" containerName="dashboard-metrics-scraper"
	Dec 27 20:54:25 default-k8s-diff-port-058924 kubelet[785]: I1227 20:54:25.545046     785 scope.go:122] "RemoveContainer" containerID="37c4318d86e79f02bbe58773a1a5fb7cfc6d4907577ccae135cbebe491459fd3"
	Dec 27 20:54:25 default-k8s-diff-port-058924 kubelet[785]: E1227 20:54:25.545228     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ggllf_kubernetes-dashboard(ef835eaf-9888-41cc-b525-faa9d4b2b6a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf" podUID="ef835eaf-9888-41cc-b525-faa9d4b2b6a6"
	Dec 27 20:54:28 default-k8s-diff-port-058924 kubelet[785]: E1227 20:54:28.466609     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf" containerName="dashboard-metrics-scraper"
	Dec 27 20:54:28 default-k8s-diff-port-058924 kubelet[785]: I1227 20:54:28.466660     785 scope.go:122] "RemoveContainer" containerID="37c4318d86e79f02bbe58773a1a5fb7cfc6d4907577ccae135cbebe491459fd3"
	Dec 27 20:54:28 default-k8s-diff-port-058924 kubelet[785]: E1227 20:54:28.466812     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ggllf_kubernetes-dashboard(ef835eaf-9888-41cc-b525-faa9d4b2b6a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf" podUID="ef835eaf-9888-41cc-b525-faa9d4b2b6a6"
	Dec 27 20:54:38 default-k8s-diff-port-058924 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 20:54:38 default-k8s-diff-port-058924 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 20:54:38 default-k8s-diff-port-058924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [cf047da0f34ba382065ad57a1c14715622f4e810d1c6aa2686329d2f26c64f9d] <==
	2025/12/27 20:53:54 Using namespace: kubernetes-dashboard
	2025/12/27 20:53:54 Using in-cluster config to connect to apiserver
	2025/12/27 20:53:54 Using secret token for csrf signing
	2025/12/27 20:53:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 20:53:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 20:53:54 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 20:53:54 Generating JWE encryption key
	2025/12/27 20:53:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 20:53:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 20:53:54 Initializing JWE encryption key from synchronized object
	2025/12/27 20:53:54 Creating in-cluster Sidecar client
	2025/12/27 20:53:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:53:54 Serving insecurely on HTTP port: 9090
	2025/12/27 20:54:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:53:54 Starting overwatch
	
	
	==> storage-provisioner [38227fcb6aa735a8d71a2d3a9cbddcc31e2b4e9f35fd1f6705090de387c7487f] <==
	I1227 20:53:43.048495       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 20:54:13.051083       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [947ffc68c455a66d413ac0e1a88286d4999d302eb9862e8dbaa8bdca4e9962f5] <==
	I1227 20:54:13.570929       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 20:54:13.570975       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 20:54:13.573181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:17.028385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:21.288540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:24.886864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:27.940028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:30.962509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:30.967100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:54:30.967242       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 20:54:30.967406       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-058924_18d4d391-fbc4-4c96-b275-e2e9fe0afaf3!
	I1227 20:54:30.969190       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"88bc2d5f-416e-4b56-8bda-cffc96e439d9", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-058924_18d4d391-fbc4-4c96-b275-e2e9fe0afaf3 became leader
	W1227 20:54:30.972940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:30.977682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:54:31.068194       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-058924_18d4d391-fbc4-4c96-b275-e2e9fe0afaf3!
	W1227 20:54:32.980690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:32.985721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:34.988842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:34.995557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:36.998812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:37.003644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:39.007211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:39.023078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:41.026081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:41.031909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
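Note: the storage-provisioner log at the end of the dump above acquires its leader-election lock on a plain v1 Endpoints object (kube-system/k8s.io-minikube-hostpath), which lines up with the repeated "v1 Endpoints is deprecated in v1.33+" client warnings it prints. When reproducing locally, that object can be inspected directly; a minimal sketch using only names that appear in the log, and assuming the kubeconfig context matches the profile name:

	kubectl --context default-k8s-diff-port-058924 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml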
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-058924 -n default-k8s-diff-port-058924
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-058924 -n default-k8s-diff-port-058924: exit status 2 (490.475901ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-058924 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
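The kubectl probe run just above filters on status.phase!=Running, so an empty result means no pod in any namespace is outside the Running phase. It can be replayed as-is when triaging by hand; the quoting around the jsonpath expression below is the only addition over the harness invocation, to keep an interactive shell from expanding the brackets:

	kubectl --context default-k8s-diff-port-058924 get po -o=jsonpath='{.items[*].metadata.name}' -A --field-selector=status.phase!=Running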
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-058924
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-058924:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "14a8831f1ae279bdee9cb950b754d19cb55a9a96bb1c6a18f3fb90e8bfce9436",
	        "Created": "2025-12-27T20:52:26.32228828Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 492584,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:53:31.469313347Z",
	            "FinishedAt": "2025-12-27T20:53:30.672195745Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/14a8831f1ae279bdee9cb950b754d19cb55a9a96bb1c6a18f3fb90e8bfce9436/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/14a8831f1ae279bdee9cb950b754d19cb55a9a96bb1c6a18f3fb90e8bfce9436/hostname",
	        "HostsPath": "/var/lib/docker/containers/14a8831f1ae279bdee9cb950b754d19cb55a9a96bb1c6a18f3fb90e8bfce9436/hosts",
	        "LogPath": "/var/lib/docker/containers/14a8831f1ae279bdee9cb950b754d19cb55a9a96bb1c6a18f3fb90e8bfce9436/14a8831f1ae279bdee9cb950b754d19cb55a9a96bb1c6a18f3fb90e8bfce9436-json.log",
	        "Name": "/default-k8s-diff-port-058924",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-058924:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-058924",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "14a8831f1ae279bdee9cb950b754d19cb55a9a96bb1c6a18f3fb90e8bfce9436",
	                "LowerDir": "/var/lib/docker/overlay2/1705b32d6f7b3acd21037f84bc864fcd3368266ae22d9d1ff6c6114e626d27cd-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1705b32d6f7b3acd21037f84bc864fcd3368266ae22d9d1ff6c6114e626d27cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1705b32d6f7b3acd21037f84bc864fcd3368266ae22d9d1ff6c6114e626d27cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1705b32d6f7b3acd21037f84bc864fcd3368266ae22d9d1ff6c6114e626d27cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-058924",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-058924/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-058924",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-058924",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-058924",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "70000734e7bf52e061d1ee9fdded19dbf00c4137d4d68834643c5c391f0fcc64",
	            "SandboxKey": "/var/run/docker/netns/70000734e7bf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-058924": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:e8:d7:40:0a:cf",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4cf559b41345f8593676aae308d8407a6052ba110f51cbc56967a3187eac038b",
	                    "EndpointID": "99bbf9f671ba7ac53f1e6e70ad418d8a09f033712a396d3406973081bd2355fc",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-058924",
	                        "14a8831f1ae2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
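Most of the inspect dump above is static container configuration; the part that matters for connectivity is the published-port section under NetworkSettings.Ports (here 8444/tcp, this profile's API-server port, is bound to 127.0.0.1:33426). A narrower query, sketched with the container name and port taken from the output above, returns just that mapping:

	docker port default-k8s-diff-port-058924 8444/tcp
	docker inspect -f '{{json .NetworkSettings.Ports}}' default-k8s-diff-port-058924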
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-058924 -n default-k8s-diff-port-058924
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-058924 -n default-k8s-diff-port-058924: exit status 2 (335.480465ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
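Both templated probes above ({{.APIServer}} earlier and {{.Host}} here) print Running yet exit with status 2, which the harness tolerates. Re-running the same status command without a --format template typically lists the host, kubelet, apiserver and kubeconfig states together, which makes it easier to see whether any individual component is reported in a non-Running state; a reproduction sketch using only flags the harness already passes:

	out/minikube-linux-arm64 status -p default-k8s-diff-port-058924 -n default-k8s-diff-port-058924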
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-058924 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-058924 logs -n 25: (1.236615887s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p cert-expiration-629954 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-629954       │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │ 27 Dec 25 20:45 UTC │
	│ start   │ -p cert-expiration-629954 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-629954       │ jenkins │ v1.37.0 │ 27 Dec 25 20:48 UTC │ 27 Dec 25 20:48 UTC │
	│ delete  │ -p cert-expiration-629954                                                                                                                                                                                                                     │ cert-expiration-629954       │ jenkins │ v1.37.0 │ 27 Dec 25 20:48 UTC │ 27 Dec 25 20:48 UTC │
	│ start   │ -p force-systemd-flag-604544 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-604544    │ jenkins │ v1.37.0 │ 27 Dec 25 20:48 UTC │                     │
	│ delete  │ -p force-systemd-env-859716                                                                                                                                                                                                                   │ force-systemd-env-859716     │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ start   │ -p cert-options-765175 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-765175          │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ ssh     │ cert-options-765175 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-765175          │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ ssh     │ -p cert-options-765175 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-765175          │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ delete  │ -p cert-options-765175                                                                                                                                                                                                                        │ cert-options-765175          │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ start   │ -p old-k8s-version-855707 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-855707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:50 UTC │                     │
	│ stop    │ -p old-k8s-version-855707 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:50 UTC │ 27 Dec 25 20:51 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-855707 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:51 UTC │ 27 Dec 25 20:51 UTC │
	│ start   │ -p old-k8s-version-855707 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:51 UTC │ 27 Dec 25 20:52 UTC │
	│ image   │ old-k8s-version-855707 image list --format=json                                                                                                                                                                                               │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ pause   │ -p old-k8s-version-855707 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │                     │
	│ delete  │ -p old-k8s-version-855707                                                                                                                                                                                                                     │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ delete  │ -p old-k8s-version-855707                                                                                                                                                                                                                     │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ start   │ -p default-k8s-diff-port-058924 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:53 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-058924 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-058924 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-058924 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
	│ start   │ -p default-k8s-diff-port-058924 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:54 UTC │
	│ image   │ default-k8s-diff-port-058924 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
	│ pause   │ -p default-k8s-diff-port-058924 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:53:31
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:53:31.185838  492447 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:53:31.186024  492447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:53:31.186055  492447 out.go:374] Setting ErrFile to fd 2...
	I1227 20:53:31.186078  492447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:53:31.186442  492447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:53:31.186920  492447 out.go:368] Setting JSON to false
	I1227 20:53:31.187822  492447 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9364,"bootTime":1766859448,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:53:31.187943  492447 start.go:143] virtualization:  
	I1227 20:53:31.190883  492447 out.go:179] * [default-k8s-diff-port-058924] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:53:31.193069  492447 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:53:31.193175  492447 notify.go:221] Checking for updates...
	I1227 20:53:31.198706  492447 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:53:31.201472  492447 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:53:31.204289  492447 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:53:31.207278  492447 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:53:31.210291  492447 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:53:31.213776  492447 config.go:182] Loaded profile config "default-k8s-diff-port-058924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:53:31.214342  492447 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:53:31.250996  492447 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:53:31.251102  492447 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:53:31.311097  492447 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:53:31.302138053 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
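
The `docker system info --format "{{json .}}"` call above returns the daemon's full info document as one JSON object, which the log then echoes as a Go struct. A minimal sketch of that decode step, keeping only a hypothetical subset of the fields visible in the output above (the real struct in info.go has many more):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo keeps only the fields this sketch cares about;
// the actual JSON document contains many more keys.
type dockerInfo struct {
	ServerVersion   string `json:"ServerVersion"`
	OperatingSystem string `json:"OperatingSystem"`
	Architecture    string `json:"Architecture"`
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
	CgroupDriver    string `json:"CgroupDriver"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("docker %s on %s/%s, %d CPUs, %d bytes RAM, cgroup driver %s\n",
		info.ServerVersion, info.OperatingSystem, info.Architecture,
		info.NCPU, info.MemTotal, info.CgroupDriver)
}
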
	I1227 20:53:31.311207  492447 docker.go:319] overlay module found
	I1227 20:53:31.314411  492447 out.go:179] * Using the docker driver based on existing profile
	I1227 20:53:31.317256  492447 start.go:309] selected driver: docker
	I1227 20:53:31.317276  492447 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-058924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-058924 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:53:31.317395  492447 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:53:31.318103  492447 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:53:31.378936  492447 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:53:31.369439673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:53:31.379272  492447 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:53:31.379307  492447 cni.go:84] Creating CNI manager for ""
	I1227 20:53:31.379363  492447 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:53:31.379403  492447 start.go:353] cluster config:
	{Name:default-k8s-diff-port-058924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-058924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:53:31.384358  492447 out.go:179] * Starting "default-k8s-diff-port-058924" primary control-plane node in "default-k8s-diff-port-058924" cluster
	I1227 20:53:31.387085  492447 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:53:31.389893  492447 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:53:31.392527  492447 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:53:31.392569  492447 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:53:31.392583  492447 cache.go:65] Caching tarball of preloaded images
	I1227 20:53:31.392618  492447 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:53:31.392667  492447 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:53:31.392677  492447 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:53:31.392789  492447 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/config.json ...
	I1227 20:53:31.411530  492447 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:53:31.411555  492447 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:53:31.411570  492447 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:53:31.411598  492447 start.go:360] acquireMachinesLock for default-k8s-diff-port-058924: {Name:mk1f359d7e6bf82a20b5c0ba5278536cffac40ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:53:31.411659  492447 start.go:364] duration metric: took 36.816µs to acquireMachinesLock for "default-k8s-diff-port-058924"
	I1227 20:53:31.411683  492447 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:53:31.411693  492447 fix.go:54] fixHost starting: 
	I1227 20:53:31.411960  492447 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-058924 --format={{.State.Status}}
	I1227 20:53:31.433060  492447 fix.go:112] recreateIfNeeded on default-k8s-diff-port-058924: state=Stopped err=<nil>
	W1227 20:53:31.433090  492447 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:53:31.436212  492447 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-058924" ...
	I1227 20:53:31.436281  492447 cli_runner.go:164] Run: docker start default-k8s-diff-port-058924
	I1227 20:53:31.700299  492447 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-058924 --format={{.State.Status}}
	I1227 20:53:31.722974  492447 kic.go:430] container "default-k8s-diff-port-058924" state is running.
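
Restarting a stopped profile reduces to `docker start <container>` followed by polling the container state until it reports "running" (the kic.go line above is that check). A rough stand-alone equivalent, with the container name hard-coded for illustration:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerState returns the Docker state string ("running", "exited", ...).
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const name = "default-k8s-diff-port-058924"
	if err := exec.Command("docker", "start", name).Run(); err != nil {
		panic(err)
	}
	// Poll until the container reports "running" or give up after ~30s.
	for i := 0; i < 30; i++ {
		if state, err := containerState(name); err == nil && state == "running" {
			fmt.Println("container is running")
			return
		}
		time.Sleep(time.Second)
	}
	panic("container never reached running state")
}
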
	I1227 20:53:31.723610  492447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-058924
	I1227 20:53:31.744165  492447 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/config.json ...
	I1227 20:53:31.744386  492447 machine.go:94] provisionDockerMachine start ...
	I1227 20:53:31.744451  492447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:53:31.767933  492447 main.go:144] libmachine: Using SSH client type: native
	I1227 20:53:31.768262  492447 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1227 20:53:31.768271  492447 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:53:31.768864  492447 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46624->127.0.0.1:33423: read: connection reset by peer
	I1227 20:53:34.908806  492447 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-058924
	
	I1227 20:53:34.908830  492447 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-058924"
	I1227 20:53:34.908894  492447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:53:34.926356  492447 main.go:144] libmachine: Using SSH client type: native
	I1227 20:53:34.926679  492447 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1227 20:53:34.926696  492447 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-058924 && echo "default-k8s-diff-port-058924" | sudo tee /etc/hostname
	I1227 20:53:35.075133  492447 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-058924
	
	I1227 20:53:35.075226  492447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:53:35.093219  492447 main.go:144] libmachine: Using SSH client type: native
	I1227 20:53:35.093566  492447 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1227 20:53:35.093586  492447 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-058924' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-058924/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-058924' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:53:35.239741  492447 main.go:144] libmachine: SSH cmd err, output: <nil>: 
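
provisionDockerMachine talks to the container over the forwarded SSH port (127.0.0.1:33423 here) and runs plain shell commands such as `hostname` and the `/etc/hosts` patch above; the first dial can fail with "connection reset by peer" while sshd is still coming up, so the client simply retries. A minimal sketch with golang.org/x/crypto/ssh (the key path and retry count below are illustrative, not minikube's actual values):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// runSSH dials addr with the given private key and runs one command.
func runSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, not a production host
		Timeout:         10 * time.Second,
	}
	var client *ssh.Client
	for i := 0; i < 5; i++ { // retry while sshd inside the container starts up
		if client, err = ssh.Dial("tcp", addr, cfg); err == nil {
			break
		}
		time.Sleep(2 * time.Second)
	}
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runSSH("127.0.0.1:33423", "docker", "id_rsa", "hostname")
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
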
	I1227 20:53:35.239812  492447 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:53:35.239849  492447 ubuntu.go:190] setting up certificates
	I1227 20:53:35.239872  492447 provision.go:84] configureAuth start
	I1227 20:53:35.239947  492447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-058924
	I1227 20:53:35.270500  492447 provision.go:143] copyHostCerts
	I1227 20:53:35.270576  492447 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:53:35.270598  492447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:53:35.270679  492447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:53:35.270773  492447 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:53:35.270778  492447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:53:35.270804  492447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:53:35.270855  492447 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:53:35.270860  492447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:53:35.270882  492447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:53:35.270930  492447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-058924 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-058924 localhost minikube]
	I1227 20:53:35.581301  492447 provision.go:177] copyRemoteCerts
	I1227 20:53:35.581382  492447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:53:35.581423  492447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:53:35.601388  492447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa Username:docker}
	I1227 20:53:35.702982  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1227 20:53:35.721134  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:53:35.739181  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:53:35.756049  492447 provision.go:87] duration metric: took 516.142273ms to configureAuth
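
configureAuth regenerates the machine's server certificate with the SANs listed above (loopback, the node IP 192.168.76.2, the hostname, localhost, minikube), signed by the profile CA. A compressed sketch of the x509 side; for brevity it signs with a freshly generated throwaway CA instead of loading ca.pem/ca-key.pem from disk, which is what the real flow does:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA for the profile's ca.pem / ca-key.pem.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs from the log line above.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-058924"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-058924", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
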
	I1227 20:53:35.756080  492447 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:53:35.756276  492447 config.go:182] Loaded profile config "default-k8s-diff-port-058924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:53:35.756380  492447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:53:35.774363  492447 main.go:144] libmachine: Using SSH client type: native
	I1227 20:53:35.774687  492447 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1227 20:53:35.774708  492447 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:53:36.141795  492447 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:53:36.141818  492447 machine.go:97] duration metric: took 4.397423053s to provisionDockerMachine
	I1227 20:53:36.141830  492447 start.go:293] postStartSetup for "default-k8s-diff-port-058924" (driver="docker")
	I1227 20:53:36.141841  492447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:53:36.141905  492447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:53:36.141943  492447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:53:36.163164  492447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa Username:docker}
	I1227 20:53:36.261051  492447 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:53:36.264376  492447 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:53:36.264406  492447 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:53:36.264435  492447 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:53:36.264495  492447 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:53:36.264628  492447 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:53:36.264741  492447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:53:36.272125  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:53:36.289248  492447 start.go:296] duration metric: took 147.401066ms for postStartSetup
	I1227 20:53:36.289325  492447 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:53:36.289372  492447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:53:36.306092  492447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa Username:docker}
	I1227 20:53:36.402304  492447 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:53:36.406910  492447 fix.go:56] duration metric: took 4.99521019s for fixHost
	I1227 20:53:36.406938  492447 start.go:83] releasing machines lock for "default-k8s-diff-port-058924", held for 4.995265983s
	I1227 20:53:36.407010  492447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-058924
	I1227 20:53:36.424018  492447 ssh_runner.go:195] Run: cat /version.json
	I1227 20:53:36.424069  492447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:53:36.424337  492447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:53:36.424387  492447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:53:36.446667  492447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa Username:docker}
	I1227 20:53:36.446699  492447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa Username:docker}
	I1227 20:53:36.629286  492447 ssh_runner.go:195] Run: systemctl --version
	I1227 20:53:36.635571  492447 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:53:36.670656  492447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:53:36.674910  492447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:53:36.674980  492447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:53:36.683005  492447 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:53:36.683031  492447 start.go:496] detecting cgroup driver to use...
	I1227 20:53:36.683063  492447 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:53:36.683116  492447 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:53:36.698014  492447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:53:36.711706  492447 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:53:36.711819  492447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:53:36.730519  492447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:53:36.744526  492447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:53:36.861739  492447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:53:36.975727  492447 docker.go:234] disabling docker service ...
	I1227 20:53:36.975860  492447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:53:36.989905  492447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:53:37.002617  492447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:53:37.132847  492447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:53:37.244838  492447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:53:37.259473  492447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:53:37.273282  492447 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:53:37.273343  492447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:53:37.282848  492447 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:53:37.282910  492447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:53:37.291299  492447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:53:37.299854  492447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:53:37.308145  492447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:53:37.316225  492447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:53:37.324466  492447 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:53:37.332338  492447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:53:37.340733  492447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:53:37.348394  492447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:53:37.355687  492447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:53:37.474678  492447 ssh_runner.go:195] Run: sudo systemctl restart crio
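
The chain of `sudo sed -i` commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, forces the cgroupfs cgroup manager, re-adds conmon_cgroup and the unprivileged-port sysctl, and then restarts crio. A sketch of the first two substitutions as a Go program (path, keys, and values are exactly those from the log; the regexes mirror the sed expressions and the write needs root):

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
}
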
	I1227 20:53:37.657184  492447 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:53:37.657255  492447 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:53:37.661099  492447 start.go:574] Will wait 60s for crictl version
	I1227 20:53:37.661159  492447 ssh_runner.go:195] Run: which crictl
	I1227 20:53:37.664554  492447 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:53:37.688108  492447 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:53:37.688198  492447 ssh_runner.go:195] Run: crio --version
	I1227 20:53:37.715843  492447 ssh_runner.go:195] Run: crio --version
	I1227 20:53:37.746794  492447 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:53:37.749650  492447 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-058924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:53:37.765756  492447 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 20:53:37.769765  492447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
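
The host.minikube.internal entry is added with a grep-and-rewrite idiom rather than a blind append: any existing line for that name is filtered out and a fresh "192.168.76.1<TAB>host.minikube.internal" line is appended, so repeated starts never duplicate the entry. The same idea in Go (paths and values taken from the command above):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.76.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
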
	I1227 20:53:37.779922  492447 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-058924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-058924 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:53:37.780049  492447 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:53:37.780105  492447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:53:37.814232  492447 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:53:37.814253  492447 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:53:37.814305  492447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:53:37.838484  492447 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:53:37.838509  492447 cache_images.go:86] Images are preloaded, skipping loading
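
The "all images are preloaded" decision comes from `sudo crictl images --output json`: the output is decoded and the repo tags are compared against the expected image set for v1.35.0. A small sketch of the decode step (the JSON field names below follow crictl's output format as I understand it; treat them as an assumption):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the assumed shape of `crictl images --output json`:
// an "images" array whose entries carry "repoTags".
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	fmt.Println("kube-apiserver preloaded:", have["registry.k8s.io/kube-apiserver:v1.35.0"])
}
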
	I1227 20:53:37.838517  492447 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.35.0 crio true true} ...
	I1227 20:53:37.838628  492447 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-058924 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-058924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
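
The kubelet drop-in shown above is rendered from a template and then scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf: only --hostname-override and --node-ip change per node. A toy rendering of that idea with text/template (the unit text is abbreviated and not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubeletPath": "/var/lib/minikube/binaries/v1.35.0/kubelet",
		"NodeName":    "default-k8s-diff-port-058924",
		"NodeIP":      "192.168.76.2",
	})
}
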
	I1227 20:53:37.838726  492447 ssh_runner.go:195] Run: crio config
	I1227 20:53:37.892957  492447 cni.go:84] Creating CNI manager for ""
	I1227 20:53:37.892977  492447 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:53:37.892991  492447 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:53:37.893021  492447 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-058924 NodeName:default-k8s-diff-port-058924 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:53:37.893154  492447 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-058924"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
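
The kubeadm.yaml written above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration separated by "---"). If you want to sanity-check such a file locally, a decoder loop with gopkg.in/yaml.v3 walks every document; this is only an illustration, not how kubeadm itself consumes the file:

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			panic(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}
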
	I1227 20:53:37.893224  492447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:53:37.900897  492447 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:53:37.900971  492447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:53:37.908125  492447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1227 20:53:37.920217  492447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:53:37.931973  492447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I1227 20:53:37.944636  492447 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:53:37.948118  492447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:53:37.957313  492447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:53:38.076918  492447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:53:38.096482  492447 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924 for IP: 192.168.76.2
	I1227 20:53:38.096507  492447 certs.go:195] generating shared ca certs ...
	I1227 20:53:38.096524  492447 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:53:38.096681  492447 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:53:38.096721  492447 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:53:38.096728  492447 certs.go:257] generating profile certs ...
	I1227 20:53:38.096813  492447 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/client.key
	I1227 20:53:38.096884  492447 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/apiserver.key.eada78d3
	I1227 20:53:38.096924  492447 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/proxy-client.key
	I1227 20:53:38.097041  492447 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:53:38.097071  492447 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:53:38.097078  492447 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:53:38.097108  492447 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:53:38.097133  492447 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:53:38.097159  492447 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:53:38.097206  492447 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:53:38.097918  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:53:38.117874  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:53:38.134494  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:53:38.150587  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:53:38.166560  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 20:53:38.184261  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:53:38.201248  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:53:38.246292  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:53:38.290525  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:53:38.321897  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:53:38.340586  492447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:53:38.358940  492447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:53:38.371508  492447 ssh_runner.go:195] Run: openssl version
	I1227 20:53:38.377912  492447 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:53:38.385061  492447 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:53:38.392199  492447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:53:38.395687  492447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:53:38.395749  492447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:53:38.438132  492447 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:53:38.445218  492447 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:53:38.452915  492447 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:53:38.459932  492447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:53:38.463380  492447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:53:38.463435  492447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:53:38.503771  492447 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:53:38.510998  492447 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:53:38.518171  492447 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:53:38.525110  492447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:53:38.528751  492447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:53:38.528814  492447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:53:38.570048  492447 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:53:38.577153  492447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:53:38.580808  492447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:53:38.621369  492447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:53:38.666571  492447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:53:38.708300  492447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:53:38.749106  492447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:53:38.800806  492447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
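
The batch of `openssl x509 -noout -checkend 86400` runs above asks whether each control-plane certificate will still be valid 24 hours from now; a failing check would force regeneration. The same check expressed with Go's crypto/x509:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor24h reports whether the PEM certificate at path is still valid
// 86400 seconds from now, mirroring `openssl x509 -checkend 86400`.
func validFor24h(path string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(24 * time.Hour).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	fmt.Println("valid for at least 24h:", ok)
}
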
	I1227 20:53:38.896900  492447 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-058924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-058924 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:53:38.897044  492447 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:53:38.897143  492447 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:53:38.939121  492447 cri.go:96] found id: "59e2430d76d4f0fe1a76d346425874373d6f2997b74d9256e8d9c44383dd8e9c"
	I1227 20:53:38.939190  492447 cri.go:96] found id: "0284e6610331862ca822c0a398c4ebd7e2be94ee4e5d1cb1536b5202a533ba8b"
	I1227 20:53:38.939218  492447 cri.go:96] found id: ""
	I1227 20:53:38.939298  492447 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:53:38.960834  492447 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:53:38Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:53:38.960960  492447 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:53:38.987860  492447 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:53:38.987919  492447 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:53:38.988011  492447 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:53:39.046199  492447 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:53:39.046686  492447 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-058924" does not appear in /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:53:39.046855  492447 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-272475/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-058924" cluster setting kubeconfig missing "default-k8s-diff-port-058924" context setting]
	I1227 20:53:39.047207  492447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:53:39.048410  492447 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:53:39.059456  492447 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1227 20:53:39.059536  492447 kubeadm.go:602] duration metric: took 71.587442ms to restartPrimaryControlPlane
	I1227 20:53:39.059560  492447 kubeadm.go:403] duration metric: took 162.682144ms to StartCluster
	I1227 20:53:39.059606  492447 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:53:39.059702  492447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:53:39.060398  492447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
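
The "kubeconfig needs updating (will repair)" path adds the missing cluster and context entries to the shared kubeconfig and writes it back under the file lock logged above. The client-go clientcmd API used for this kind of repair looks roughly as follows; the server URL and certificate path below are approximations for illustration, not the exact values minikube writes:

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	const (
		path = "/home/jenkins/minikube-integration/22332-272475/kubeconfig"
		name = "default-k8s-diff-port-058924"
	)
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	// Add the cluster and context entries only if they are missing.
	if _, ok := cfg.Clusters[name]; !ok {
		cfg.Clusters[name] = &api.Cluster{
			Server:               "https://192.168.76.2:8444",
			CertificateAuthority: "/home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt",
		}
		cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
	}
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}
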
	I1227 20:53:39.060888  492447 config.go:182] Loaded profile config "default-k8s-diff-port-058924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:53:39.060960  492447 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:53:39.061020  492447 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:53:39.061107  492447 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-058924"
	I1227 20:53:39.061134  492447 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-058924"
	W1227 20:53:39.061154  492447 addons.go:248] addon storage-provisioner should already be in state true
	I1227 20:53:39.061208  492447 host.go:66] Checking if "default-k8s-diff-port-058924" exists ...
	I1227 20:53:39.061890  492447 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-058924 --format={{.State.Status}}
	I1227 20:53:39.062950  492447 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-058924"
	I1227 20:53:39.062967  492447 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-058924"
	W1227 20:53:39.062973  492447 addons.go:248] addon dashboard should already be in state true
	I1227 20:53:39.063002  492447 host.go:66] Checking if "default-k8s-diff-port-058924" exists ...
	I1227 20:53:39.063475  492447 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-058924 --format={{.State.Status}}
	I1227 20:53:39.063668  492447 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-058924"
	I1227 20:53:39.063690  492447 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-058924"
	I1227 20:53:39.063954  492447 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-058924 --format={{.State.Status}}
	I1227 20:53:39.074038  492447 out.go:179] * Verifying Kubernetes components...
	I1227 20:53:39.081664  492447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:53:39.116001  492447 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 20:53:39.116127  492447 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:53:39.118250  492447 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-058924"
	W1227 20:53:39.118268  492447 addons.go:248] addon default-storageclass should already be in state true
	I1227 20:53:39.118290  492447 host.go:66] Checking if "default-k8s-diff-port-058924" exists ...
	I1227 20:53:39.118723  492447 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-058924 --format={{.State.Status}}
	I1227 20:53:39.123314  492447 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:53:39.123333  492447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:53:39.123398  492447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:53:39.126129  492447 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 20:53:39.130466  492447 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 20:53:39.130490  492447 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 20:53:39.130587  492447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:53:39.152100  492447 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:53:39.152122  492447 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:53:39.152178  492447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-058924
	I1227 20:53:39.200443  492447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa Username:docker}
	I1227 20:53:39.210355  492447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa Username:docker}
	I1227 20:53:39.212491  492447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/default-k8s-diff-port-058924/id_rsa Username:docker}
	I1227 20:53:39.410915  492447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:53:39.436321  492447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:53:39.459850  492447 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-058924" to be "Ready" ...
	I1227 20:53:39.478955  492447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:53:39.503626  492447 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 20:53:39.503709  492447 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 20:53:39.605257  492447 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 20:53:39.605331  492447 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 20:53:39.649962  492447 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 20:53:39.650033  492447 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 20:53:39.662698  492447 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 20:53:39.662769  492447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 20:53:39.675379  492447 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 20:53:39.675452  492447 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 20:53:39.694284  492447 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 20:53:39.694373  492447 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 20:53:39.728016  492447 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 20:53:39.728090  492447 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 20:53:39.759822  492447 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 20:53:39.759901  492447 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 20:53:39.795480  492447 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:53:39.795556  492447 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 20:53:39.810318  492447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:53:42.087873  492447 node_ready.go:49] node "default-k8s-diff-port-058924" is "Ready"
	I1227 20:53:42.087979  492447 node_ready.go:38] duration metric: took 2.628101767s for node "default-k8s-diff-port-058924" to be "Ready" ...
	I1227 20:53:42.088011  492447 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:53:42.088125  492447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:53:43.770195  492447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.333796393s)
	I1227 20:53:43.770249  492447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.291217852s)
	I1227 20:53:43.770524  492447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.960107831s)
	I1227 20:53:43.770730  492447 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.682570097s)
	I1227 20:53:43.770747  492447 api_server.go:72] duration metric: took 4.709750607s to wait for apiserver process to appear ...
	I1227 20:53:43.770754  492447 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:53:43.770781  492447 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1227 20:53:43.773778  492447 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-058924 addons enable metrics-server
	
	I1227 20:53:43.780350  492447 api_server.go:325] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1227 20:53:43.782612  492447 api_server.go:141] control plane version: v1.35.0
	I1227 20:53:43.782636  492447 api_server.go:131] duration metric: took 11.875667ms to wait for apiserver health ...
	I1227 20:53:43.782645  492447 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:53:43.786840  492447 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 20:53:43.787472  492447 system_pods.go:59] 8 kube-system pods found
	I1227 20:53:43.787557  492447 system_pods.go:61] "coredns-7d764666f9-7wf76" [14f9ecf5-c5b1-4458-bce4-18c5f12a447a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:53:43.787584  492447 system_pods.go:61] "etcd-default-k8s-diff-port-058924" [b6248c66-2ad3-43a3-ba5a-b5f3bd9219a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:53:43.787619  492447 system_pods.go:61] "kindnet-8clbx" [a53eca44-5c16-4e1c-b208-061922a489d6] Running
	I1227 20:53:43.787646  492447 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-058924" [2a305629-e147-40ce-9422-c1010a8bbbcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:53:43.787725  492447 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-058924" [b4da8504-1367-4584-8f91-0de40e6c3b81] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:53:43.787753  492447 system_pods.go:61] "kube-proxy-m2mtv" [b85165f6-d028-4fd5-92e8-e1b227aa2270] Running
	I1227 20:53:43.787794  492447 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-058924" [e1319201-ba51-43c7-b16d-efd9120f0e5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:53:43.787827  492447 system_pods.go:61] "storage-provisioner" [e205de3f-3506-425b-a039-4dfa897cf8f9] Running
	I1227 20:53:43.787870  492447 system_pods.go:74] duration metric: took 5.203988ms to wait for pod list to return data ...
	I1227 20:53:43.787902  492447 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:53:43.790039  492447 addons.go:530] duration metric: took 4.729016875s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 20:53:43.790577  492447 default_sa.go:45] found service account: "default"
	I1227 20:53:43.790596  492447 default_sa.go:55] duration metric: took 2.675781ms for default service account to be created ...
	I1227 20:53:43.790604  492447 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:53:43.793142  492447 system_pods.go:86] 8 kube-system pods found
	I1227 20:53:43.793170  492447 system_pods.go:89] "coredns-7d764666f9-7wf76" [14f9ecf5-c5b1-4458-bce4-18c5f12a447a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:53:43.793190  492447 system_pods.go:89] "etcd-default-k8s-diff-port-058924" [b6248c66-2ad3-43a3-ba5a-b5f3bd9219a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:53:43.793198  492447 system_pods.go:89] "kindnet-8clbx" [a53eca44-5c16-4e1c-b208-061922a489d6] Running
	I1227 20:53:43.793207  492447 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-058924" [2a305629-e147-40ce-9422-c1010a8bbbcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:53:43.793216  492447 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-058924" [b4da8504-1367-4584-8f91-0de40e6c3b81] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:53:43.793221  492447 system_pods.go:89] "kube-proxy-m2mtv" [b85165f6-d028-4fd5-92e8-e1b227aa2270] Running
	I1227 20:53:43.793229  492447 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-058924" [e1319201-ba51-43c7-b16d-efd9120f0e5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:53:43.793233  492447 system_pods.go:89] "storage-provisioner" [e205de3f-3506-425b-a039-4dfa897cf8f9] Running
	I1227 20:53:43.793241  492447 system_pods.go:126] duration metric: took 2.631613ms to wait for k8s-apps to be running ...
	I1227 20:53:43.793248  492447 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:53:43.793299  492447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:53:43.809748  492447 system_svc.go:56] duration metric: took 16.491364ms WaitForService to wait for kubelet
	I1227 20:53:43.809775  492447 kubeadm.go:587] duration metric: took 4.748776085s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:53:43.809793  492447 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:53:43.814705  492447 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:53:43.814753  492447 node_conditions.go:123] node cpu capacity is 2
	I1227 20:53:43.814768  492447 node_conditions.go:105] duration metric: took 4.968942ms to run NodePressure ...
	I1227 20:53:43.814785  492447 start.go:242] waiting for startup goroutines ...
	I1227 20:53:43.814793  492447 start.go:247] waiting for cluster config update ...
	I1227 20:53:43.814804  492447 start.go:256] writing updated cluster config ...
	I1227 20:53:43.815079  492447 ssh_runner.go:195] Run: rm -f paused
	I1227 20:53:43.818622  492447 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:53:43.887173  492447 pod_ready.go:83] waiting for pod "coredns-7d764666f9-7wf76" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 20:53:45.896737  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:53:48.392906  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:53:50.393717  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:53:52.893301  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:53:55.392767  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:53:57.393378  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:53:59.892227  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:54:01.893372  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:54:04.392090  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:54:06.392663  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:54:08.893521  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:54:11.393673  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:54:13.892233  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:54:15.893035  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:54:18.393163  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:54:20.892619  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	W1227 20:54:22.893235  492447 pod_ready.go:104] pod "coredns-7d764666f9-7wf76" is not "Ready", error: <nil>
	I1227 20:54:24.892347  492447 pod_ready.go:94] pod "coredns-7d764666f9-7wf76" is "Ready"
	I1227 20:54:24.892376  492447 pod_ready.go:86] duration metric: took 41.0051772s for pod "coredns-7d764666f9-7wf76" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:54:24.894962  492447 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-058924" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:54:24.898962  492447 pod_ready.go:94] pod "etcd-default-k8s-diff-port-058924" is "Ready"
	I1227 20:54:24.898992  492447 pod_ready.go:86] duration metric: took 4.001446ms for pod "etcd-default-k8s-diff-port-058924" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:54:24.901164  492447 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-058924" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:54:24.905254  492447 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-058924" is "Ready"
	I1227 20:54:24.905282  492447 pod_ready.go:86] duration metric: took 4.092693ms for pod "kube-apiserver-default-k8s-diff-port-058924" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:54:24.907345  492447 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-058924" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:54:25.090468  492447 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-058924" is "Ready"
	I1227 20:54:25.090498  492447 pod_ready.go:86] duration metric: took 183.127736ms for pod "kube-controller-manager-default-k8s-diff-port-058924" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:54:25.290434  492447 pod_ready.go:83] waiting for pod "kube-proxy-m2mtv" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:54:25.691193  492447 pod_ready.go:94] pod "kube-proxy-m2mtv" is "Ready"
	I1227 20:54:25.691222  492447 pod_ready.go:86] duration metric: took 400.758485ms for pod "kube-proxy-m2mtv" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:54:25.890102  492447 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-058924" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:54:26.295125  492447 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-058924" is "Ready"
	I1227 20:54:26.295164  492447 pod_ready.go:86] duration metric: took 405.037792ms for pod "kube-scheduler-default-k8s-diff-port-058924" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:54:26.295177  492447 pod_ready.go:40] duration metric: took 42.476472247s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:54:26.349209  492447 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 20:54:26.352346  492447 out.go:203] 
	W1227 20:54:26.355413  492447 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 20:54:26.358320  492447 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 20:54:26.361302  492447 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-058924" cluster and "default" namespace by default
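	
	Note on the version-skew warning above: kubectl's support policy allows a client within one minor version of the API server, so 1.33 against 1.35 (a skew of 2) triggers it. A minimal sketch for confirming the versions on this profile, using standard kubectl/minikube invocations rather than anything captured in this run:
	
	  # Client and server versions reported by the kubectl currently on PATH.
	  kubectl version
	  # Run a kubectl that minikube matches to the cluster version (v1.35.0 here).
	  minikube -p default-k8s-diff-port-058924 kubectl -- version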
	
	
	==> CRI-O <==
	Dec 27 20:54:23 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:23.090821615Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:54:23 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:23.093914984Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:54:23 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:23.094067382Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:54:23 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:23.094144467Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:54:23 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:23.097864871Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:54:23 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:23.098015086Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:54:23 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:23.098090506Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:54:23 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:23.101553317Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:54:23 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:23.10158507Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:54:23 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:23.101608708Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:54:23 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:23.105308863Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:54:23 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:23.105344858Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:54:24 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:24.330803372Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f85855b2-c068-4509-a94c-f280377766be name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:54:24 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:24.331690469Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=be005dd2-813b-4df0-bf16-4cb0d10d4acb name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:54:24 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:24.332587092Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf/dashboard-metrics-scraper" id=a5e7a954-6ce5-4e79-b979-761c93738116 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:54:24 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:24.332701624Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:54:24 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:24.339759903Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:54:24 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:24.340276615Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:54:24 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:24.359370707Z" level=info msg="Created container 37c4318d86e79f02bbe58773a1a5fb7cfc6d4907577ccae135cbebe491459fd3: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf/dashboard-metrics-scraper" id=a5e7a954-6ce5-4e79-b979-761c93738116 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:54:24 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:24.362202849Z" level=info msg="Starting container: 37c4318d86e79f02bbe58773a1a5fb7cfc6d4907577ccae135cbebe491459fd3" id=a7eda10e-b72e-48f8-9a50-898de3788535 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:54:24 default-k8s-diff-port-058924 conmon[1708]: conmon 37c4318d86e79f02bbe5 <ninfo>: container 1711 exited with status 1
	Dec 27 20:54:24 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:24.367978752Z" level=info msg="Started container" PID=1711 containerID=37c4318d86e79f02bbe58773a1a5fb7cfc6d4907577ccae135cbebe491459fd3 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf/dashboard-metrics-scraper id=a7eda10e-b72e-48f8-9a50-898de3788535 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a7946247412d2da16340f62a2c16da00ee3a6dacae7f9ec990290b1d10aa9a7
	Dec 27 20:54:24 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:24.541610074Z" level=info msg="Removing container: 1f7f380f471fec7ea5567d77937a95a3f95bb124df7d8f75077be090e269c953" id=eb3d966c-996c-48d6-a393-7b314ce45ac3 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:54:24 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:24.548681932Z" level=info msg="Error loading conmon cgroup of container 1f7f380f471fec7ea5567d77937a95a3f95bb124df7d8f75077be090e269c953: cgroup deleted" id=eb3d966c-996c-48d6-a393-7b314ce45ac3 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:54:24 default-k8s-diff-port-058924 crio[656]: time="2025-12-27T20:54:24.55419084Z" level=info msg="Removed container 1f7f380f471fec7ea5567d77937a95a3f95bb124df7d8f75077be090e269c953: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf/dashboard-metrics-scraper" id=eb3d966c-996c-48d6-a393-7b314ce45ac3 name=/runtime.v1.RuntimeService/RemoveContainer
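	
	The CRI-O entries above show dashboard-metrics-scraper being created, exiting with status 1, and its previous container being removed. A hedged sketch for inspecting that container on the node itself (the container ID is the one reported in the log; crictl runs inside the node, reachable via minikube ssh):
	
	  # Shell into the node for this profile.
	  minikube -p default-k8s-diff-port-058924 ssh
	  # Inside the node: list containers, including exited ones, by name.
	  sudo crictl ps -a --name dashboard-metrics-scraper
	  # Last log lines of the exited container referenced in the CRI-O log above.
	  sudo crictl logs --tail 50 37c4318d86e79f02bbe58773a1a5fb7cfc6d4907577ccae135cbebe491459fd3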
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	37c4318d86e79       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago       Exited              dashboard-metrics-scraper   3                   3a7946247412d       dashboard-metrics-scraper-867fb5f87b-ggllf             kubernetes-dashboard
	947ffc68c455a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           29 seconds ago       Running             storage-provisioner         2                   ffd616ce150da       storage-provisioner                                    kube-system
	cf047da0f34ba       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   48 seconds ago       Running             kubernetes-dashboard        0                   a9674efb7562d       kubernetes-dashboard-b84665fb8-l99xd                   kubernetes-dashboard
	ec52181844fe0       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           About a minute ago   Running             coredns                     1                   940d4cd2f1a62       coredns-7d764666f9-7wf76                               kube-system
	3a0f52a8ae08b       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   58ce9167da7f8       busybox                                                default
	38227fcb6aa73       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   ffd616ce150da       storage-provisioner                                    kube-system
	4a06bde369d1e       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           About a minute ago   Running             kindnet-cni                 1                   d49b2cc1986d0       kindnet-8clbx                                          kube-system
	84c8ca768685e       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           About a minute ago   Running             kube-proxy                  1                   50d22c81e4c85       kube-proxy-m2mtv                                       kube-system
	3f3183a409a49       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           About a minute ago   Running             etcd                        1                   0e0ce44035982       etcd-default-k8s-diff-port-058924                      kube-system
	8fd1fa375b3be       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           About a minute ago   Running             kube-scheduler              1                   2918271f1f51b       kube-scheduler-default-k8s-diff-port-058924            kube-system
	59e2430d76d4f       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           About a minute ago   Running             kube-controller-manager     1                   29c08664457fb       kube-controller-manager-default-k8s-diff-port-058924   kube-system
	0284e66103318       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           About a minute ago   Running             kube-apiserver              1                   d83d7557592a6       kube-apiserver-default-k8s-diff-port-058924            kube-system
	
	
	==> coredns [ec52181844fe079b145958d700f1e1fe5dcf49cb2b9cc8579e21772009c71688] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:58807 - 33705 "HINFO IN 7918744130668000058.3578085914535680865. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013731508s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
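	
	The CoreDNS log above shows the kubernetes plugin waiting for the API and failing watches while the apiserver restarted, consistent with the roughly 41s wait for "coredns-7d764666f9-7wf76" earlier in this run. A minimal sketch for checking CoreDNS health from outside the node (standard kubectl usage against this profile's kubeconfig):
	
	  # Status and restart count of the CoreDNS pods.
	  kubectl -n kube-system get pods -l k8s-app=kube-dns
	  # Recent CoreDNS output; readiness returns once the kubernetes plugin syncs.
	  kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20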
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-058924
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-058924
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=default-k8s-diff-port-058924
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_52_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:52:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-058924
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:54:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:54:33 +0000   Sat, 27 Dec 2025 20:52:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:54:33 +0000   Sat, 27 Dec 2025 20:52:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:54:33 +0000   Sat, 27 Dec 2025 20:52:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:54:33 +0000   Sat, 27 Dec 2025 20:53:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-058924
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                c6cef3de-c29f-4e64-acd9-52f541b38c56
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-7d764666f9-7wf76                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     111s
	  kube-system                 etcd-default-k8s-diff-port-058924                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         117s
	  kube-system                 kindnet-8clbx                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-default-k8s-diff-port-058924             250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-058924    200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-m2mtv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-default-k8s-diff-port-058924             100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-ggllf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-l99xd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  113s  node-controller  Node default-k8s-diff-port-058924 event: Registered Node default-k8s-diff-port-058924 in Controller
	  Normal  RegisteredNode  58s   node-controller  Node default-k8s-diff-port-058924 event: Registered Node default-k8s-diff-port-058924 in Controller
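	
	The node report above is kubectl describe output; it can be regenerated, and the same conditions and events queried directly, with standard kubectl commands (names as in this run):
	
	  kubectl describe node default-k8s-diff-port-058924
	  kubectl get node default-k8s-diff-port-058924 -o wide
	  kubectl get events --all-namespaces --field-selector involvedObject.name=default-k8s-diff-port-058924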
	
	
	==> dmesg <==
	[ +36.244108] systemd-journald[225]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 20:22] overlayfs: idmapped layers are currently not supported
	[Dec27 20:23] overlayfs: idmapped layers are currently not supported
	[Dec27 20:24] overlayfs: idmapped layers are currently not supported
	[Dec27 20:25] overlayfs: idmapped layers are currently not supported
	[ +35.447549] overlayfs: idmapped layers are currently not supported
	[Dec27 20:26] overlayfs: idmapped layers are currently not supported
	[Dec27 20:27] overlayfs: idmapped layers are currently not supported
	[  +6.770645] overlayfs: idmapped layers are currently not supported
	[Dec27 20:28] overlayfs: idmapped layers are currently not supported
	[ +25.872751] overlayfs: idmapped layers are currently not supported
	[Dec27 20:29] overlayfs: idmapped layers are currently not supported
	[ +32.997137] overlayfs: idmapped layers are currently not supported
	[Dec27 20:31] overlayfs: idmapped layers are currently not supported
	[Dec27 20:33] overlayfs: idmapped layers are currently not supported
	[ +33.772475] overlayfs: idmapped layers are currently not supported
	[Dec27 20:39] overlayfs: idmapped layers are currently not supported
	[Dec27 20:40] overlayfs: idmapped layers are currently not supported
	[Dec27 20:44] overlayfs: idmapped layers are currently not supported
	[Dec27 20:45] overlayfs: idmapped layers are currently not supported
	[Dec27 20:49] overlayfs: idmapped layers are currently not supported
	[Dec27 20:50] overlayfs: idmapped layers are currently not supported
	[Dec27 20:51] overlayfs: idmapped layers are currently not supported
	[Dec27 20:52] overlayfs: idmapped layers are currently not supported
	[Dec27 20:53] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3f3183a409a49620f674f6cfc989ab37a01f933a3e0df3233dd981b0090f1459] <==
	{"level":"info","ts":"2025-12-27T20:53:39.602411Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T20:53:39.602430Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T20:53:39.606364Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-27T20:53:39.606525Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T20:53:39.606625Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T20:53:39.614045Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T20:53:39.614157Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T20:53:39.895011Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T20:53:39.900853Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:53:39.900942Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T20:53:39.900962Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:53:39.900977Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T20:53:39.909505Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T20:53:39.909596Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:53:39.909640Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T20:53:39.910222Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T20:53:39.912395Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-diff-port-058924 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:53:39.912609Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:53:39.913777Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:53:39.921341Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:53:39.921678Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:53:39.921773Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:53:39.921961Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:53:39.926312Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:53:39.938028Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 20:54:43 up  2:37,  0 user,  load average: 0.85, 1.43, 1.73
	Linux default-k8s-diff-port-058924 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4a06bde369d1ed3ed4adcb37795d14e35cc6d700346f1d8735f87fcc17f5acbd] <==
	I1227 20:53:42.845156       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:53:42.845513       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 20:53:42.845705       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:53:42.845718       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:53:42.845726       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:53:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:53:43.130011       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:53:43.130039       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:53:43.130048       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:53:43.130430       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 20:54:13.080555       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1227 20:54:13.131112       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1227 20:54:13.131113       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 20:54:13.131207       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1227 20:54:14.730324       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:54:14.730358       1 metrics.go:72] Registering metrics
	I1227 20:54:14.730440       1 controller.go:711] "Syncing nftables rules"
	I1227 20:54:23.087017       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:54:23.087071       1 main.go:301] handling current node
	I1227 20:54:33.083166       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:54:33.083221       1 main.go:301] handling current node
	I1227 20:54:43.085716       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:54:43.085744       1 main.go:301] handling current node
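	
	The kindnet errors above are watch timeouts against the in-cluster service VIP (10.96.0.1:443) while the apiserver was coming back; caches sync at 20:54:14 and node handling resumes. If the VIP needed re-checking, a short sketch with standard kubectl commands:
	
	  # The default kubernetes Service and the apiserver endpoint behind it.
	  kubectl get svc kubernetes -n default
	  kubectl get endpoints kubernetes -n default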
	
	
	==> kube-apiserver [0284e6610331862ca822c0a398c4ebd7e2be94ee4e5d1cb1536b5202a533ba8b] <==
	I1227 20:53:42.111734       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:42.155143       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 20:53:42.173679       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:53:42.179123       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 20:53:42.179158       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 20:53:42.181503       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 20:53:42.181701       1 cache.go:39] Caches are synced for autoregister controller
	I1227 20:53:42.183888       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 20:53:42.185956       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 20:53:42.194381       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 20:53:42.206851       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:42.206890       1 policy_source.go:248] refreshing policies
	I1227 20:53:42.212849       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 20:53:42.269125       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:53:42.338836       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:53:42.923996       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:53:43.434774       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 20:53:43.560718       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:53:43.620990       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:53:43.637993       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:53:43.727514       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.161.31"}
	I1227 20:53:43.760078       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.250.64"}
	I1227 20:53:45.456574       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:53:45.655647       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:53:45.794252       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [59e2430d76d4f0fe1a76d346425874373d6f2997b74d9256e8d9c44383dd8e9c] <==
	I1227 20:53:45.210603       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199217       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.195104       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199279       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199287       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199296       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199268       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199347       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199355       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199363       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199369       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199375       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199381       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199390       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.234052       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 20:53:45.234190       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="default-k8s-diff-port-058924"
	I1227 20:53:45.234290       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 20:53:45.199338       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199397       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199423       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.199431       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.269671       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.284597       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:45.284819       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:53:45.284835       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [84c8ca768685ecc278641733fdb714032c8f936c2c0791b5cd5d1aa930606977] <==
	I1227 20:53:42.899423       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:53:43.159225       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:53:43.361576       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:43.361623       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 20:53:43.361736       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:53:43.570937       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:53:43.571089       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:53:43.589087       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:53:43.589595       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:53:43.589625       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:53:43.591372       1 config.go:200] "Starting service config controller"
	I1227 20:53:43.591393       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:53:43.597639       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:53:43.597719       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:53:43.597765       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:53:43.597793       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:53:43.598328       1 config.go:309] "Starting node config controller"
	I1227 20:53:43.598431       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:53:43.598463       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:53:43.693410       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:53:43.701236       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 20:53:43.701277       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8fd1fa375b3beeeb401d83d95cfb1937027fe068753413aea7fd26b9455cd5a0] <==
	I1227 20:53:39.906248       1 serving.go:386] Generated self-signed cert in-memory
	I1227 20:53:42.209339       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 20:53:42.209383       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:53:42.222948       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 20:53:42.222978       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1227 20:53:42.223016       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:53:42.223056       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 20:53:42.223076       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:53:42.223103       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 20:53:42.223133       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1227 20:53:42.223142       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:53:42.324015       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:42.324469       1 shared_informer.go:377] "Caches are synced"
	I1227 20:53:42.325367       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:53:58 default-k8s-diff-port-058924 kubelet[785]: I1227 20:53:58.466375     785 scope.go:122] "RemoveContainer" containerID="f90cdefc113bcc1729c3dd72de47cfa9650d0f81da1efbb86d4429b7e6b9a684"
	Dec 27 20:53:58 default-k8s-diff-port-058924 kubelet[785]: E1227 20:53:58.466529     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ggllf_kubernetes-dashboard(ef835eaf-9888-41cc-b525-faa9d4b2b6a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf" podUID="ef835eaf-9888-41cc-b525-faa9d4b2b6a6"
	Dec 27 20:54:01 default-k8s-diff-port-058924 kubelet[785]: E1227 20:54:01.328144     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf" containerName="dashboard-metrics-scraper"
	Dec 27 20:54:01 default-k8s-diff-port-058924 kubelet[785]: I1227 20:54:01.328190     785 scope.go:122] "RemoveContainer" containerID="f90cdefc113bcc1729c3dd72de47cfa9650d0f81da1efbb86d4429b7e6b9a684"
	Dec 27 20:54:01 default-k8s-diff-port-058924 kubelet[785]: I1227 20:54:01.479768     785 scope.go:122] "RemoveContainer" containerID="f90cdefc113bcc1729c3dd72de47cfa9650d0f81da1efbb86d4429b7e6b9a684"
	Dec 27 20:54:01 default-k8s-diff-port-058924 kubelet[785]: E1227 20:54:01.480147     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf" containerName="dashboard-metrics-scraper"
	Dec 27 20:54:01 default-k8s-diff-port-058924 kubelet[785]: I1227 20:54:01.480187     785 scope.go:122] "RemoveContainer" containerID="1f7f380f471fec7ea5567d77937a95a3f95bb124df7d8f75077be090e269c953"
	Dec 27 20:54:01 default-k8s-diff-port-058924 kubelet[785]: E1227 20:54:01.480367     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ggllf_kubernetes-dashboard(ef835eaf-9888-41cc-b525-faa9d4b2b6a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf" podUID="ef835eaf-9888-41cc-b525-faa9d4b2b6a6"
	Dec 27 20:54:08 default-k8s-diff-port-058924 kubelet[785]: E1227 20:54:08.466212     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf" containerName="dashboard-metrics-scraper"
	Dec 27 20:54:08 default-k8s-diff-port-058924 kubelet[785]: I1227 20:54:08.466719     785 scope.go:122] "RemoveContainer" containerID="1f7f380f471fec7ea5567d77937a95a3f95bb124df7d8f75077be090e269c953"
	Dec 27 20:54:08 default-k8s-diff-port-058924 kubelet[785]: E1227 20:54:08.466957     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ggllf_kubernetes-dashboard(ef835eaf-9888-41cc-b525-faa9d4b2b6a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf" podUID="ef835eaf-9888-41cc-b525-faa9d4b2b6a6"
	Dec 27 20:54:13 default-k8s-diff-port-058924 kubelet[785]: I1227 20:54:13.512081     785 scope.go:122] "RemoveContainer" containerID="38227fcb6aa735a8d71a2d3a9cbddcc31e2b4e9f35fd1f6705090de387c7487f"
	Dec 27 20:54:24 default-k8s-diff-port-058924 kubelet[785]: E1227 20:54:24.330185     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf" containerName="dashboard-metrics-scraper"
	Dec 27 20:54:24 default-k8s-diff-port-058924 kubelet[785]: I1227 20:54:24.330224     785 scope.go:122] "RemoveContainer" containerID="1f7f380f471fec7ea5567d77937a95a3f95bb124df7d8f75077be090e269c953"
	Dec 27 20:54:24 default-k8s-diff-port-058924 kubelet[785]: E1227 20:54:24.414095     785 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-7wf76" containerName="coredns"
	Dec 27 20:54:24 default-k8s-diff-port-058924 kubelet[785]: I1227 20:54:24.540171     785 scope.go:122] "RemoveContainer" containerID="1f7f380f471fec7ea5567d77937a95a3f95bb124df7d8f75077be090e269c953"
	Dec 27 20:54:25 default-k8s-diff-port-058924 kubelet[785]: E1227 20:54:25.544604     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf" containerName="dashboard-metrics-scraper"
	Dec 27 20:54:25 default-k8s-diff-port-058924 kubelet[785]: I1227 20:54:25.545046     785 scope.go:122] "RemoveContainer" containerID="37c4318d86e79f02bbe58773a1a5fb7cfc6d4907577ccae135cbebe491459fd3"
	Dec 27 20:54:25 default-k8s-diff-port-058924 kubelet[785]: E1227 20:54:25.545228     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ggllf_kubernetes-dashboard(ef835eaf-9888-41cc-b525-faa9d4b2b6a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf" podUID="ef835eaf-9888-41cc-b525-faa9d4b2b6a6"
	Dec 27 20:54:28 default-k8s-diff-port-058924 kubelet[785]: E1227 20:54:28.466609     785 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf" containerName="dashboard-metrics-scraper"
	Dec 27 20:54:28 default-k8s-diff-port-058924 kubelet[785]: I1227 20:54:28.466660     785 scope.go:122] "RemoveContainer" containerID="37c4318d86e79f02bbe58773a1a5fb7cfc6d4907577ccae135cbebe491459fd3"
	Dec 27 20:54:28 default-k8s-diff-port-058924 kubelet[785]: E1227 20:54:28.466812     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ggllf_kubernetes-dashboard(ef835eaf-9888-41cc-b525-faa9d4b2b6a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ggllf" podUID="ef835eaf-9888-41cc-b525-faa9d4b2b6a6"
	Dec 27 20:54:38 default-k8s-diff-port-058924 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 20:54:38 default-k8s-diff-port-058924 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 20:54:38 default-k8s-diff-port-058924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [cf047da0f34ba382065ad57a1c14715622f4e810d1c6aa2686329d2f26c64f9d] <==
	2025/12/27 20:53:54 Using namespace: kubernetes-dashboard
	2025/12/27 20:53:54 Using in-cluster config to connect to apiserver
	2025/12/27 20:53:54 Using secret token for csrf signing
	2025/12/27 20:53:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 20:53:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 20:53:54 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 20:53:54 Generating JWE encryption key
	2025/12/27 20:53:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 20:53:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 20:53:54 Initializing JWE encryption key from synchronized object
	2025/12/27 20:53:54 Creating in-cluster Sidecar client
	2025/12/27 20:53:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:53:54 Serving insecurely on HTTP port: 9090
	2025/12/27 20:54:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:53:54 Starting overwatch
	
	
	==> storage-provisioner [38227fcb6aa735a8d71a2d3a9cbddcc31e2b4e9f35fd1f6705090de387c7487f] <==
	I1227 20:53:43.048495       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 20:54:13.051083       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [947ffc68c455a66d413ac0e1a88286d4999d302eb9862e8dbaa8bdca4e9962f5] <==
	W1227 20:54:13.573181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:17.028385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:21.288540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:24.886864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:27.940028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:30.962509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:30.967100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:54:30.967242       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 20:54:30.967406       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-058924_18d4d391-fbc4-4c96-b275-e2e9fe0afaf3!
	I1227 20:54:30.969190       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"88bc2d5f-416e-4b56-8bda-cffc96e439d9", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-058924_18d4d391-fbc4-4c96-b275-e2e9fe0afaf3 became leader
	W1227 20:54:30.972940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:30.977682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:54:31.068194       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-058924_18d4d391-fbc4-4c96-b275-e2e9fe0afaf3!
	W1227 20:54:32.980690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:32.985721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:34.988842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:34.995557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:36.998812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:37.003644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:39.007211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:39.023078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:41.026081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:41.031909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:43.039680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:54:43.045188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-058924 -n default-k8s-diff-port-058924
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-058924 -n default-k8s-diff-port-058924: exit status 2 (372.930879ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
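Note: the APIServer field prints Running even though the command exits 2, because minikube status also encodes component state in its exit code, not only in the formatted output. To re-check by hand which component tripped the non-zero status, the unformatted status is more informative (a sketch; --output json is a standard minikube status flag):

	out/minikube-linux-arm64 status -p default-k8s-diff-port-058924 --output json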
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-058924 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-193865 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-193865 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (252.500061ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:55:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
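The exit path above is the interesting part: the enable aborts in its paused-state check, which (per the error text) shells out to "sudo runc list -f json" on the node and fails because /run/runc does not exist. A minimal way to reproduce that check by hand while the profile is still up, with the commands lifted straight from the error message (with crio as the runtime, a missing /run/runc does not by itself mean the node is broken):

	out/minikube-linux-arm64 ssh -p embed-certs-193865 -- "sudo runc list -f json"
	out/minikube-linux-arm64 ssh -p embed-certs-193865 -- "ls -ld /run/runc"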
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-193865 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-193865 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-193865 describe deploy/metrics-server -n kube-system: exit status 1 (90.292118ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-193865 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
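Because the enable command itself failed, the metrics-server deployment was never created, so the describe call and the image assertion have nothing to inspect. Once an enable does succeed, roughly the same check the test performs can be repeated by hand (a sketch using plain kubectl, not part of the test harness):

	kubectl --context embed-certs-193865 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'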
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-193865
helpers_test.go:244: (dbg) docker inspect embed-certs-193865:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "910081dd96e2a5637f3b408a8057a7254f3b80b49d653ffba57b3de358a32ed9",
	        "Created": "2025-12-27T20:54:52.231777017Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 497054,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:54:52.294230964Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/910081dd96e2a5637f3b408a8057a7254f3b80b49d653ffba57b3de358a32ed9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/910081dd96e2a5637f3b408a8057a7254f3b80b49d653ffba57b3de358a32ed9/hostname",
	        "HostsPath": "/var/lib/docker/containers/910081dd96e2a5637f3b408a8057a7254f3b80b49d653ffba57b3de358a32ed9/hosts",
	        "LogPath": "/var/lib/docker/containers/910081dd96e2a5637f3b408a8057a7254f3b80b49d653ffba57b3de358a32ed9/910081dd96e2a5637f3b408a8057a7254f3b80b49d653ffba57b3de358a32ed9-json.log",
	        "Name": "/embed-certs-193865",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-193865:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-193865",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "910081dd96e2a5637f3b408a8057a7254f3b80b49d653ffba57b3de358a32ed9",
	                "LowerDir": "/var/lib/docker/overlay2/fdd0efe955c9b2f82090fe0b88aba3b05df41490a2ac55c7669ec25ea57da42f-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdd0efe955c9b2f82090fe0b88aba3b05df41490a2ac55c7669ec25ea57da42f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdd0efe955c9b2f82090fe0b88aba3b05df41490a2ac55c7669ec25ea57da42f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdd0efe955c9b2f82090fe0b88aba3b05df41490a2ac55c7669ec25ea57da42f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-193865",
	                "Source": "/var/lib/docker/volumes/embed-certs-193865/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-193865",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-193865",
	                "name.minikube.sigs.k8s.io": "embed-certs-193865",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a3882363d97048f0f3aa927749ed31f09939e1040c0951612196175c9757b4b",
	            "SandboxKey": "/var/run/docker/netns/2a3882363d97",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-193865": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:9e:5a:62:fd:0e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "58b23b0ff82a7c2d13a32fdf89113eb222c2e15062269f5db64ae246b28bdf6b",
	                    "EndpointID": "14c3e76508e64d0b07bb3f39f7c191f3ce1ddc463f5a03528a3720f5170fadd5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-193865",
	                        "910081dd96e2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
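Most of the inspect dump above matters only for the published ports and the static IP 192.168.76.2. If the host port mapped to the node's SSH port is needed while debugging, a standard docker inspect template (not something the test itself runs) pulls it directly:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-193865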
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-193865 -n embed-certs-193865
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-193865 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-193865 logs -n 25: (1.155489832s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p force-systemd-env-859716                                                                                                                                                                                                                   │ force-systemd-env-859716     │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ start   │ -p cert-options-765175 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-765175          │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ ssh     │ cert-options-765175 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-765175          │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ ssh     │ -p cert-options-765175 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-765175          │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ delete  │ -p cert-options-765175                                                                                                                                                                                                                        │ cert-options-765175          │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ start   │ -p old-k8s-version-855707 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-855707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:50 UTC │                     │
	│ stop    │ -p old-k8s-version-855707 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:50 UTC │ 27 Dec 25 20:51 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-855707 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:51 UTC │ 27 Dec 25 20:51 UTC │
	│ start   │ -p old-k8s-version-855707 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:51 UTC │ 27 Dec 25 20:52 UTC │
	│ image   │ old-k8s-version-855707 image list --format=json                                                                                                                                                                                               │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ pause   │ -p old-k8s-version-855707 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │                     │
	│ delete  │ -p old-k8s-version-855707                                                                                                                                                                                                                     │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ delete  │ -p old-k8s-version-855707                                                                                                                                                                                                                     │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ start   │ -p default-k8s-diff-port-058924 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:53 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-058924 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-058924 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-058924 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
	│ start   │ -p default-k8s-diff-port-058924 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:54 UTC │
	│ image   │ default-k8s-diff-port-058924 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
	│ pause   │ -p default-k8s-diff-port-058924 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-058924                                                                                                                                                                                                               │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
	│ delete  │ -p default-k8s-diff-port-058924                                                                                                                                                                                                               │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
	│ start   │ -p embed-certs-193865 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:55 UTC │
	│ addons  │ enable metrics-server -p embed-certs-193865 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:54:47
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:54:47.371213  496630 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:54:47.371411  496630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:54:47.371423  496630 out.go:374] Setting ErrFile to fd 2...
	I1227 20:54:47.371429  496630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:54:47.371748  496630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:54:47.372266  496630 out.go:368] Setting JSON to false
	I1227 20:54:47.373274  496630 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9440,"bootTime":1766859448,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:54:47.373352  496630 start.go:143] virtualization:  
	I1227 20:54:47.377767  496630 out.go:179] * [embed-certs-193865] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:54:47.382366  496630 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:54:47.382480  496630 notify.go:221] Checking for updates...
	I1227 20:54:47.389279  496630 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:54:47.392601  496630 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:54:47.395885  496630 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:54:47.399163  496630 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:54:47.402339  496630 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:54:47.405960  496630 config.go:182] Loaded profile config "force-systemd-flag-604544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:54:47.406092  496630 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:54:47.438982  496630 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:54:47.439088  496630 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:54:47.496469  496630 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:54:47.486422135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:54:47.496577  496630 docker.go:319] overlay module found
	I1227 20:54:47.499704  496630 out.go:179] * Using the docker driver based on user configuration
	I1227 20:54:47.502574  496630 start.go:309] selected driver: docker
	I1227 20:54:47.502595  496630 start.go:928] validating driver "docker" against <nil>
	I1227 20:54:47.502608  496630 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:54:47.503304  496630 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:54:47.554359  496630 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:54:47.545637788 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:54:47.554520  496630 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 20:54:47.554760  496630 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:54:47.557835  496630 out.go:179] * Using Docker driver with root privileges
	I1227 20:54:47.560646  496630 cni.go:84] Creating CNI manager for ""
	I1227 20:54:47.560714  496630 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:54:47.560728  496630 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 20:54:47.560813  496630 start.go:353] cluster config:
	{Name:embed-certs-193865 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-193865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:54:47.563884  496630 out.go:179] * Starting "embed-certs-193865" primary control-plane node in "embed-certs-193865" cluster
	I1227 20:54:47.566732  496630 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:54:47.569700  496630 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:54:47.572615  496630 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:54:47.572661  496630 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:54:47.572675  496630 cache.go:65] Caching tarball of preloaded images
	I1227 20:54:47.572706  496630 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:54:47.572771  496630 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:54:47.572780  496630 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:54:47.572889  496630 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/config.json ...
	I1227 20:54:47.572905  496630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/config.json: {Name:mkfb98132012fd98406e168238e15e6808887cad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:54:47.591724  496630 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:54:47.591747  496630 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:54:47.591762  496630 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:54:47.591790  496630 start.go:360] acquireMachinesLock for embed-certs-193865: {Name:mkc50e87a609f0ebbab428159240cc886136162f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:54:47.591890  496630 start.go:364] duration metric: took 79.842µs to acquireMachinesLock for "embed-certs-193865"
	I1227 20:54:47.591921  496630 start.go:93] Provisioning new machine with config: &{Name:embed-certs-193865 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-193865 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:54:47.591994  496630 start.go:125] createHost starting for "" (driver="docker")
	I1227 20:54:47.595367  496630 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 20:54:47.595596  496630 start.go:159] libmachine.API.Create for "embed-certs-193865" (driver="docker")
	I1227 20:54:47.595633  496630 client.go:173] LocalClient.Create starting
	I1227 20:54:47.595707  496630 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem
	I1227 20:54:47.595742  496630 main.go:144] libmachine: Decoding PEM data...
	I1227 20:54:47.595760  496630 main.go:144] libmachine: Parsing certificate...
	I1227 20:54:47.595819  496630 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem
	I1227 20:54:47.595842  496630 main.go:144] libmachine: Decoding PEM data...
	I1227 20:54:47.595861  496630 main.go:144] libmachine: Parsing certificate...
	I1227 20:54:47.596207  496630 cli_runner.go:164] Run: docker network inspect embed-certs-193865 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 20:54:47.611906  496630 cli_runner.go:211] docker network inspect embed-certs-193865 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 20:54:47.611996  496630 network_create.go:284] running [docker network inspect embed-certs-193865] to gather additional debugging logs...
	I1227 20:54:47.612019  496630 cli_runner.go:164] Run: docker network inspect embed-certs-193865
	W1227 20:54:47.629196  496630 cli_runner.go:211] docker network inspect embed-certs-193865 returned with exit code 1
	I1227 20:54:47.629233  496630 network_create.go:287] error running [docker network inspect embed-certs-193865]: docker network inspect embed-certs-193865: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-193865 not found
	I1227 20:54:47.629246  496630 network_create.go:289] output of [docker network inspect embed-certs-193865]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-193865 not found
	
	** /stderr **
	I1227 20:54:47.629344  496630 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:54:47.646071  496630 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9521cb9225c5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:1d:ef:38:b7:a6} reservation:<nil>}
	I1227 20:54:47.646447  496630 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-68d11cc2ab47 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:8d:ad:37:cb:fe} reservation:<nil>}
	I1227 20:54:47.646729  496630 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d3b7cfff4895 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:4a:e3:08:10:2f} reservation:<nil>}
	I1227 20:54:47.647215  496630 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a440f0}
	I1227 20:54:47.647246  496630 network_create.go:124] attempt to create docker network embed-certs-193865 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 20:54:47.647301  496630 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-193865 embed-certs-193865
	I1227 20:54:47.706167  496630 network_create.go:108] docker network embed-certs-193865 192.168.76.0/24 created
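
The lines above show minikube probing the existing Docker bridge networks (192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 were already taken), settling on the free subnet 192.168.76.0/24, and creating a dedicated bridge network for the profile. As a rough illustration of that last step only, here is a minimal Go sketch that shells out to the docker CLI; the network name, subnet and label are placeholders and this is not minikube's actual implementation (which lives in network_create.go).

// network_create_sketch.go: minimal sketch of creating a dedicated Docker
// bridge network once a free subnet has been picked. Assumes the docker CLI
// is on PATH; name/subnet/gateway below are illustrative placeholders.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	name := "example-net"       // hypothetical network name
	subnet := "192.168.76.0/24" // a free /24 chosen beforehand
	gateway := "192.168.76.1"

	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.example=true",
		name)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("docker network create failed: %v\n%s", err, out)
	}
	fmt.Printf("created network %s (%s): %s", name, subnet, out)
}
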
	I1227 20:54:47.706199  496630 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-193865" container
	I1227 20:54:47.706271  496630 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 20:54:47.722789  496630 cli_runner.go:164] Run: docker volume create embed-certs-193865 --label name.minikube.sigs.k8s.io=embed-certs-193865 --label created_by.minikube.sigs.k8s.io=true
	I1227 20:54:47.739281  496630 oci.go:103] Successfully created a docker volume embed-certs-193865
	I1227 20:54:47.739377  496630 cli_runner.go:164] Run: docker run --rm --name embed-certs-193865-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-193865 --entrypoint /usr/bin/test -v embed-certs-193865:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 20:54:48.278697  496630 oci.go:107] Successfully prepared a docker volume embed-certs-193865
	I1227 20:54:48.278757  496630 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:54:48.278766  496630 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 20:54:48.278831  496630 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-193865:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 20:54:52.158272  496630 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-193865:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.879394306s)
	I1227 20:54:52.158314  496630 kic.go:203] duration metric: took 3.879544267s to extract preloaded images to volume ...
	W1227 20:54:52.158438  496630 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 20:54:52.158548  496630 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 20:54:52.216213  496630 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-193865 --name embed-certs-193865 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-193865 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-193865 --network embed-certs-193865 --ip 192.168.76.2 --volume embed-certs-193865:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 20:54:52.530020  496630 cli_runner.go:164] Run: docker container inspect embed-certs-193865 --format={{.State.Running}}
	I1227 20:54:52.547748  496630 cli_runner.go:164] Run: docker container inspect embed-certs-193865 --format={{.State.Status}}
	I1227 20:54:52.576010  496630 cli_runner.go:164] Run: docker exec embed-certs-193865 stat /var/lib/dpkg/alternatives/iptables
	I1227 20:54:52.627377  496630 oci.go:144] the created container "embed-certs-193865" has a running status.
	I1227 20:54:52.627407  496630 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa...
	I1227 20:54:53.049289  496630 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 20:54:53.078867  496630 cli_runner.go:164] Run: docker container inspect embed-certs-193865 --format={{.State.Status}}
	I1227 20:54:53.108990  496630 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 20:54:53.109027  496630 kic_runner.go:114] Args: [docker exec --privileged embed-certs-193865 chown docker:docker /home/docker/.ssh/authorized_keys]
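
Above, the provisioner generates an SSH keypair for the kic container (id_rsa/id_rsa.pub) and installs the public half as /home/docker/.ssh/authorized_keys before chown-ing it to the docker user. A compact sketch of producing such a keypair in Go follows; it uses golang.org/x/crypto/ssh for the authorized_keys encoding, and the output file names and key size are placeholders rather than minikube's values.

// sshkey_sketch.go: generate an RSA keypair and an authorized_keys entry,
// similar in shape to the id_rsa/id_rsa.pub pair created in the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	// Private key, PEM-encoded (the role of .../machines/<name>/id_rsa).
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa_example", privPEM, 0600); err != nil {
		log.Fatal(err)
	}

	// Public half in authorized_keys format (what gets pushed to the node).
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("id_rsa_example.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
		log.Fatal(err)
	}
}
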
	I1227 20:54:53.172685  496630 cli_runner.go:164] Run: docker container inspect embed-certs-193865 --format={{.State.Status}}
	I1227 20:54:53.197840  496630 machine.go:94] provisionDockerMachine start ...
	I1227 20:54:53.197921  496630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:54:53.224508  496630 main.go:144] libmachine: Using SSH client type: native
	I1227 20:54:53.224885  496630 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1227 20:54:53.224899  496630 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:54:53.429131  496630 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-193865
	
	I1227 20:54:53.429168  496630 ubuntu.go:182] provisioning hostname "embed-certs-193865"
	I1227 20:54:53.429257  496630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:54:53.447462  496630 main.go:144] libmachine: Using SSH client type: native
	I1227 20:54:53.447770  496630 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1227 20:54:53.447781  496630 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-193865 && echo "embed-certs-193865" | sudo tee /etc/hostname
	I1227 20:54:53.604199  496630 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-193865
	
	I1227 20:54:53.604275  496630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:54:53.622367  496630 main.go:144] libmachine: Using SSH client type: native
	I1227 20:54:53.622775  496630 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1227 20:54:53.622803  496630 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-193865' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-193865/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-193865' | sudo tee -a /etc/hosts; 
				fi
			fi
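
The shell snippet above is pushed over SSH so that the node's own hostname always resolves locally: if a line ending in the hostname already exists in /etc/hosts nothing happens, otherwise the 127.0.1.1 entry is rewritten or appended. A rough local Go equivalent of that idempotent edit is sketched below; it operates on a scratch copy of the hosts file (an assumption, so it can run without root).

// hosts_sketch.go: idempotent "map 127.0.1.1 to the node name" update,
// mirroring the shell logic shown above. Not minikube's code.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func ensureHostname(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	text := string(data)
	// Already present as the last field of some line: nothing to do.
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(text) {
		return nil
	}
	line127 := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if line127.MatchString(text) {
		text = line127.ReplaceAllString(text, "127.0.1.1 "+name)
	} else {
		if !strings.HasSuffix(text, "\n") {
			text += "\n"
		}
		text += "127.0.1.1 " + name + "\n"
	}
	return os.WriteFile(path, []byte(text), 0644)
}

func main() {
	// "./hosts" is a stand-in path so the sketch does not touch /etc/hosts.
	if err := ensureHostname("./hosts", "embed-certs-193865"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
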
	I1227 20:54:53.769537  496630 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:54:53.769568  496630 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:54:53.769638  496630 ubuntu.go:190] setting up certificates
	I1227 20:54:53.769648  496630 provision.go:84] configureAuth start
	I1227 20:54:53.769730  496630 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-193865
	I1227 20:54:53.786666  496630 provision.go:143] copyHostCerts
	I1227 20:54:53.786739  496630 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:54:53.786753  496630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:54:53.786831  496630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:54:53.786940  496630 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:54:53.786951  496630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:54:53.786983  496630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:54:53.787050  496630 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:54:53.787060  496630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:54:53.787086  496630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:54:53.787145  496630 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.embed-certs-193865 san=[127.0.0.1 192.168.76.2 embed-certs-193865 localhost minikube]
	I1227 20:54:53.909814  496630 provision.go:177] copyRemoteCerts
	I1227 20:54:53.909893  496630 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:54:53.909951  496630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:54:53.927107  496630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa Username:docker}
	I1227 20:54:54.026399  496630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:54:54.045668  496630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1227 20:54:54.064182  496630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 20:54:54.082629  496630 provision.go:87] duration metric: took 312.954746ms to configureAuth
	I1227 20:54:54.082656  496630 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:54:54.082865  496630 config.go:182] Loaded profile config "embed-certs-193865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:54:54.082969  496630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:54:54.100321  496630 main.go:144] libmachine: Using SSH client type: native
	I1227 20:54:54.100640  496630 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1227 20:54:54.100654  496630 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:54:54.388816  496630 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:54:54.388837  496630 machine.go:97] duration metric: took 1.19097675s to provisionDockerMachine
	I1227 20:54:54.388847  496630 client.go:176] duration metric: took 6.793202325s to LocalClient.Create
	I1227 20:54:54.388875  496630 start.go:167] duration metric: took 6.793265543s to libmachine.API.Create "embed-certs-193865"
	I1227 20:54:54.388882  496630 start.go:293] postStartSetup for "embed-certs-193865" (driver="docker")
	I1227 20:54:54.388892  496630 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:54:54.388954  496630 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:54:54.388990  496630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:54:54.405250  496630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa Username:docker}
	I1227 20:54:54.509305  496630 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:54:54.512738  496630 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:54:54.512766  496630 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:54:54.512779  496630 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:54:54.512835  496630 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:54:54.512916  496630 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:54:54.513022  496630 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:54:54.520349  496630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:54:54.537625  496630 start.go:296] duration metric: took 148.7287ms for postStartSetup
	I1227 20:54:54.538009  496630 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-193865
	I1227 20:54:54.554545  496630 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/config.json ...
	I1227 20:54:54.554829  496630 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:54:54.554876  496630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:54:54.576405  496630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa Username:docker}
	I1227 20:54:54.674452  496630 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:54:54.678895  496630 start.go:128] duration metric: took 7.086886817s to createHost
	I1227 20:54:54.678918  496630 start.go:83] releasing machines lock for "embed-certs-193865", held for 7.087014117s
	I1227 20:54:54.678989  496630 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-193865
	I1227 20:54:54.694962  496630 ssh_runner.go:195] Run: cat /version.json
	I1227 20:54:54.694983  496630 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:54:54.695011  496630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:54:54.695036  496630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:54:54.717674  496630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa Username:docker}
	I1227 20:54:54.731704  496630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa Username:docker}
	I1227 20:54:54.933590  496630 ssh_runner.go:195] Run: systemctl --version
	I1227 20:54:54.940068  496630 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:54:54.977329  496630 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:54:54.981912  496630 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:54:54.981983  496630 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:54:55.013008  496630 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 20:54:55.013115  496630 start.go:496] detecting cgroup driver to use...
	I1227 20:54:55.013168  496630 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:54:55.013240  496630 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:54:55.033276  496630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:54:55.046622  496630 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:54:55.046689  496630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:54:55.064632  496630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:54:55.083966  496630 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:54:55.200539  496630 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:54:55.318000  496630 docker.go:234] disabling docker service ...
	I1227 20:54:55.318109  496630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:54:55.339421  496630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:54:55.352657  496630 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:54:55.466357  496630 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:54:55.598013  496630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:54:55.611099  496630 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:54:55.624859  496630 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:54:55.624957  496630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:54:55.633536  496630 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:54:55.633631  496630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:54:55.642337  496630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:54:55.650703  496630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:54:55.659287  496630 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:54:55.667313  496630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:54:55.675658  496630 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:54:55.688620  496630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:54:55.697347  496630 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:54:55.704850  496630 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:54:55.712713  496630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:54:55.821964  496630 ssh_runner.go:195] Run: sudo systemctl restart crio
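
The sed commands above rewrite whole "key = value" lines (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) in /etc/crio/crio.conf.d/02-crio.conf before systemd reloads and restarts CRI-O. A small Go sketch of that same "replace the entire key = value line" edit is shown below; it works on a local copy of the file, an assumption so the sketch needs neither root nor a real CRI-O install.

// crio_conf_sketch.go: replace a whole `key = value` line in a TOML-style
// config, the way the log's sed commands force cgroup_manager = "cgroupfs".
package main

import (
	"fmt"
	"os"
	"regexp"
)

func setConfKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, out, 0644)
}

func main() {
	// "./02-crio.conf" is a local copy used only for illustration.
	if err := setConfKey("./02-crio.conf", "cgroup_manager", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
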
	I1227 20:54:55.994133  496630 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:54:55.994241  496630 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:54:55.998076  496630 start.go:574] Will wait 60s for crictl version
	I1227 20:54:55.998173  496630 ssh_runner.go:195] Run: which crictl
	I1227 20:54:56.001734  496630 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:54:56.035587  496630 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:54:56.035757  496630 ssh_runner.go:195] Run: crio --version
	I1227 20:54:56.064988  496630 ssh_runner.go:195] Run: crio --version
	I1227 20:54:56.098899  496630 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:54:56.101774  496630 cli_runner.go:164] Run: docker network inspect embed-certs-193865 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:54:56.118175  496630 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 20:54:56.121822  496630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:54:56.132050  496630 kubeadm.go:884] updating cluster {Name:embed-certs-193865 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-193865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:54:56.132165  496630 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:54:56.132224  496630 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:54:56.167268  496630 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:54:56.167293  496630 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:54:56.167349  496630 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:54:56.191406  496630 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:54:56.191428  496630 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:54:56.191437  496630 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 20:54:56.191530  496630 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-193865 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-193865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:54:56.191611  496630 ssh_runner.go:195] Run: crio config
	I1227 20:54:56.264410  496630 cni.go:84] Creating CNI manager for ""
	I1227 20:54:56.264433  496630 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:54:56.264449  496630 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:54:56.264477  496630 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-193865 NodeName:embed-certs-193865 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:54:56.264636  496630 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-193865"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
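
The generated kubeadm config above is a single file containing four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration), which is later fed to kubeadm init --config. A quick sketch that decodes the documents and prints each apiVersion/kind is shown below; it assumes the YAML has been saved locally as ./kubeadm.yaml and that gopkg.in/yaml.v3 is available in the module.

// kubeadm_yaml_sketch.go: walk the multi-document kubeadm config and print
// each document's apiVersion and kind, to illustrate its overall structure.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("./kubeadm.yaml") // local copy of the config above
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more YAML documents
			}
			log.Fatal(err)
		}
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}
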
	
	I1227 20:54:56.264709  496630 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:54:56.273161  496630 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:54:56.273226  496630 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:54:56.282278  496630 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1227 20:54:56.294959  496630 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:54:56.307955  496630 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1227 20:54:56.320575  496630 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:54:56.324071  496630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:54:56.333849  496630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:54:56.458811  496630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:54:56.475561  496630 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865 for IP: 192.168.76.2
	I1227 20:54:56.475632  496630 certs.go:195] generating shared ca certs ...
	I1227 20:54:56.475663  496630 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:54:56.475855  496630 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:54:56.475945  496630 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:54:56.475968  496630 certs.go:257] generating profile certs ...
	I1227 20:54:56.476056  496630 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/client.key
	I1227 20:54:56.476108  496630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/client.crt with IP's: []
	I1227 20:54:56.667885  496630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/client.crt ...
	I1227 20:54:56.667918  496630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/client.crt: {Name:mk56ee718ea17ab8cbc9e4d1e50891d54601baf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:54:56.668116  496630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/client.key ...
	I1227 20:54:56.668129  496630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/client.key: {Name:mkf54c138c2cc3a9714b7e235d35e7ceeeeb51c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:54:56.668231  496630 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/apiserver.key.b049a295
	I1227 20:54:56.668249  496630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/apiserver.crt.b049a295 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 20:54:57.168482  496630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/apiserver.crt.b049a295 ...
	I1227 20:54:57.168515  496630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/apiserver.crt.b049a295: {Name:mk010398c3b3c3f9590006d3bce6f017c1e9b18c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:54:57.168708  496630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/apiserver.key.b049a295 ...
	I1227 20:54:57.168725  496630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/apiserver.key.b049a295: {Name:mk5bd845dd24e0266a623c6e07acf1984127527a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:54:57.168811  496630 certs.go:382] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/apiserver.crt.b049a295 -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/apiserver.crt
	I1227 20:54:57.168887  496630 certs.go:386] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/apiserver.key.b049a295 -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/apiserver.key
	I1227 20:54:57.168954  496630 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/proxy-client.key
	I1227 20:54:57.168974  496630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/proxy-client.crt with IP's: []
	I1227 20:54:57.260940  496630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/proxy-client.crt ...
	I1227 20:54:57.260969  496630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/proxy-client.crt: {Name:mkeec17e4fd453ee348c19afc5d7cc6bdc73e5da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:54:57.261138  496630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/proxy-client.key ...
	I1227 20:54:57.261167  496630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/proxy-client.key: {Name:mkc3cd0cd0595c71874ecc5fd5a4cdcee4ac9101 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:54:57.261372  496630 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:54:57.261418  496630 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:54:57.261432  496630 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:54:57.261477  496630 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:54:57.261514  496630 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:54:57.261543  496630 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:54:57.261595  496630 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
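
The certs.go lines above reuse the shared minikube CA and generate the profile certificates (client, apiserver, proxy-client), with the apiserver cert carrying the SANs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2). Below is a self-contained sketch of that general "create a CA, then sign a serving certificate with extra SANs" pattern using Go's crypto/x509; key sizes, names and lifetimes are illustrative assumptions, not minikube's values or code.

// cert_sketch.go: build a throwaway CA and a CA-signed serving certificate
// with DNS and IP SANs, echoing the shape of the apiserver.crt generation.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// 1. CA key and self-signed CA certificate.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "exampleCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// 2. Serving key and a certificate signed by the CA, with SANs like the log's.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		DNSNames:     []string{"localhost", "embed-certs-193865"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}

	// 3. Emit the serving certificate as PEM, the format of apiserver.crt.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
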
	I1227 20:54:57.262163  496630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:54:57.280580  496630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:54:57.298800  496630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:54:57.316008  496630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:54:57.332865  496630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1227 20:54:57.350399  496630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 20:54:57.367851  496630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:54:57.384709  496630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 20:54:57.401853  496630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:54:57.419039  496630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:54:57.435066  496630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:54:57.451841  496630 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:54:57.464694  496630 ssh_runner.go:195] Run: openssl version
	I1227 20:54:57.470777  496630 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:54:57.477919  496630 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:54:57.485207  496630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:54:57.488700  496630 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:54:57.488800  496630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:54:57.529705  496630 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:54:57.537363  496630 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 20:54:57.544984  496630 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:54:57.552495  496630 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:54:57.560058  496630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:54:57.563711  496630 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:54:57.563828  496630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:54:57.604700  496630 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:54:57.612007  496630 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/274336.pem /etc/ssl/certs/51391683.0
	I1227 20:54:57.619080  496630 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:54:57.626630  496630 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:54:57.634160  496630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:54:57.637743  496630 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:54:57.637825  496630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:54:57.678614  496630 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:54:57.686309  496630 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2743362.pem /etc/ssl/certs/3ec20f2e.0
	I1227 20:54:57.693677  496630 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:54:57.697218  496630 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 20:54:57.697275  496630 kubeadm.go:401] StartCluster: {Name:embed-certs-193865 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-193865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:54:57.697349  496630 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:54:57.697419  496630 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:54:57.740492  496630 cri.go:96] found id: ""
	I1227 20:54:57.740564  496630 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:54:57.751731  496630 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 20:54:57.760186  496630 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:54:57.760278  496630 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:54:57.770487  496630 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:54:57.770504  496630 kubeadm.go:158] found existing configuration files:
	
	I1227 20:54:57.770585  496630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 20:54:57.779255  496630 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:54:57.779317  496630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:54:57.786958  496630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 20:54:57.794646  496630 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:54:57.794751  496630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:54:57.802562  496630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 20:54:57.810003  496630 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:54:57.810083  496630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:54:57.817223  496630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 20:54:57.824897  496630 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:54:57.824963  496630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 20:54:57.832167  496630 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:54:57.869158  496630 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 20:54:57.869222  496630 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 20:54:57.966507  496630 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 20:54:57.966600  496630 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 20:54:57.966639  496630 kubeadm.go:319] OS: Linux
	I1227 20:54:57.966686  496630 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 20:54:57.966736  496630 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 20:54:57.966784  496630 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 20:54:57.966833  496630 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 20:54:57.966881  496630 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 20:54:57.966932  496630 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 20:54:57.966979  496630 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 20:54:57.967029  496630 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 20:54:57.967076  496630 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 20:54:58.034636  496630 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 20:54:58.034772  496630 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 20:54:58.035278  496630 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 20:54:58.042788  496630 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 20:54:58.046664  496630 out.go:252]   - Generating certificates and keys ...
	I1227 20:54:58.046760  496630 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 20:54:58.046831  496630 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 20:54:58.156911  496630 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 20:54:58.330706  496630 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 20:54:58.636433  496630 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 20:54:58.692411  496630 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 20:54:58.760214  496630 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 20:54:58.760556  496630 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-193865 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 20:54:59.069982  496630 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 20:54:59.070135  496630 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-193865 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 20:54:59.142347  496630 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 20:54:59.278181  496630 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 20:54:59.569236  496630 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 20:54:59.569516  496630 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 20:54:59.881403  496630 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 20:55:00.350975  496630 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 20:55:00.567817  496630 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 20:55:00.760009  496630 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 20:55:00.905185  496630 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 20:55:00.905915  496630 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 20:55:00.908896  496630 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 20:55:00.912537  496630 out.go:252]   - Booting up control plane ...
	I1227 20:55:00.912669  496630 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 20:55:00.912757  496630 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 20:55:00.913219  496630 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 20:55:00.931255  496630 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 20:55:00.931370  496630 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 20:55:00.940789  496630 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 20:55:00.941152  496630 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 20:55:00.941294  496630 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 20:55:01.073440  496630 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 20:55:01.073688  496630 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 20:55:02.073841  496630 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001755959s
	I1227 20:55:02.077700  496630 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 20:55:02.077862  496630 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1227 20:55:02.078008  496630 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 20:55:02.078213  496630 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 20:55:03.104115  496630 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.02592726s
	I1227 20:55:05.174435  496630 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.09671912s
	I1227 20:55:07.079499  496630 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.00155068s
	I1227 20:55:07.116617  496630 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 20:55:07.134476  496630 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 20:55:07.152668  496630 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 20:55:07.152872  496630 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-193865 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 20:55:07.174529  496630 kubeadm.go:319] [bootstrap-token] Using token: uxztxe.v0dijzrsjf3h0ik7
	I1227 20:55:07.177418  496630 out.go:252]   - Configuring RBAC rules ...
	I1227 20:55:07.177582  496630 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 20:55:07.187214  496630 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 20:55:07.195866  496630 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 20:55:07.202963  496630 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 20:55:07.208461  496630 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 20:55:07.215686  496630 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 20:55:07.489240  496630 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 20:55:07.938359  496630 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 20:55:08.488378  496630 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 20:55:08.489617  496630 kubeadm.go:319] 
	I1227 20:55:08.489689  496630 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 20:55:08.489699  496630 kubeadm.go:319] 
	I1227 20:55:08.489772  496630 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 20:55:08.489781  496630 kubeadm.go:319] 
	I1227 20:55:08.489805  496630 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 20:55:08.489865  496630 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 20:55:08.489917  496630 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 20:55:08.489925  496630 kubeadm.go:319] 
	I1227 20:55:08.489976  496630 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 20:55:08.489985  496630 kubeadm.go:319] 
	I1227 20:55:08.490030  496630 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 20:55:08.490038  496630 kubeadm.go:319] 
	I1227 20:55:08.490087  496630 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 20:55:08.490162  496630 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 20:55:08.490230  496630 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 20:55:08.490238  496630 kubeadm.go:319] 
	I1227 20:55:08.490317  496630 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 20:55:08.490393  496630 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 20:55:08.490401  496630 kubeadm.go:319] 
	I1227 20:55:08.490480  496630 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token uxztxe.v0dijzrsjf3h0ik7 \
	I1227 20:55:08.490588  496630 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ff29328d1e0d612c7979c16c69d6042f5f31e931d111cc12c8320ed4e4ab5152 \
	I1227 20:55:08.490612  496630 kubeadm.go:319] 	--control-plane 
	I1227 20:55:08.490620  496630 kubeadm.go:319] 
	I1227 20:55:08.490701  496630 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 20:55:08.490709  496630 kubeadm.go:319] 
	I1227 20:55:08.490787  496630 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token uxztxe.v0dijzrsjf3h0ik7 \
	I1227 20:55:08.490887  496630 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ff29328d1e0d612c7979c16c69d6042f5f31e931d111cc12c8320ed4e4ab5152 
	I1227 20:55:08.494712  496630 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 20:55:08.495130  496630 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 20:55:08.495245  496630 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
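	The join commands printed above embed a bootstrap token and the SHA-256 hash of the cluster CA certificate. A minimal sketch, run on the control-plane node, of how that hash can be recomputed and the tokens listed before handing the command to a worker; the CA path shown is the kubeadm default (/etc/kubernetes/pki/ca.crt), and minikube typically keeps its copy under /var/lib/minikube/certs instead:
	    # list the bootstrap tokens kubeadm created for this cluster
	    sudo kubeadm token list
	    # recompute the discovery CA cert hash and compare it with the value printed above
	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'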
	I1227 20:55:08.495264  496630 cni.go:84] Creating CNI manager for ""
	I1227 20:55:08.495272  496630 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:55:08.498440  496630 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1227 20:55:08.501336  496630 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 20:55:08.505421  496630 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 20:55:08.505440  496630 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 20:55:08.519688  496630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
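	For the docker driver with the crio runtime, minikube settles on kindnet and applies its manifest with the bundled kubectl, as shown above. A small sketch of verifying the result on the node, assuming the DaemonSet is named "kindnet" (only the pod name kindnet-fqnrt appears later in this log):
	    # confirm the portmap CNI plugin binary minikube stat'ed is present
	    stat /opt/cni/bin/portmap
	    # confirm the kindnet DaemonSet was created by the applied manifest
	    sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system get daemonset kindnet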
	I1227 20:55:08.807749  496630 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 20:55:08.807877  496630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:55:08.807987  496630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-193865 minikube.k8s.io/updated_at=2025_12_27T20_55_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562 minikube.k8s.io/name=embed-certs-193865 minikube.k8s.io/primary=true
	I1227 20:55:08.974808  496630 ops.go:34] apiserver oom_adj: -16
	I1227 20:55:08.974930  496630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:55:09.475307  496630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:55:09.975250  496630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:55:10.475073  496630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:55:10.975877  496630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:55:11.475221  496630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:55:11.974989  496630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:55:12.475033  496630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:55:12.586964  496630 kubeadm.go:1114] duration metric: took 3.779134494s to wait for elevateKubeSystemPrivileges
	I1227 20:55:12.586992  496630 kubeadm.go:403] duration metric: took 14.889721684s to StartCluster
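	The repeated "kubectl get sa default" calls above are minikube polling until the default service account exists after it grants cluster-admin to kube-system:default (the minikube-rbac binding created earlier). A hedged equivalent done by hand on the node:
	    # inspect the clusterrolebinding minikube created for elevateKubeSystemPrivileges
	    sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      get clusterrolebinding minikube-rbac -o wide
	    # the retry loop above simply waits for this lookup to succeed
	    sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n default get serviceaccount default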
	I1227 20:55:12.587014  496630 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:55:12.587076  496630 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:55:12.588041  496630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:55:12.588264  496630 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:55:12.588377  496630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 20:55:12.588626  496630 config.go:182] Loaded profile config "embed-certs-193865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:55:12.588672  496630 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:55:12.588738  496630 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-193865"
	I1227 20:55:12.588756  496630 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-193865"
	I1227 20:55:12.588778  496630 host.go:66] Checking if "embed-certs-193865" exists ...
	I1227 20:55:12.589293  496630 addons.go:70] Setting default-storageclass=true in profile "embed-certs-193865"
	I1227 20:55:12.589317  496630 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-193865"
	I1227 20:55:12.589596  496630 cli_runner.go:164] Run: docker container inspect embed-certs-193865 --format={{.State.Status}}
	I1227 20:55:12.589801  496630 cli_runner.go:164] Run: docker container inspect embed-certs-193865 --format={{.State.Status}}
	I1227 20:55:12.592317  496630 out.go:179] * Verifying Kubernetes components...
	I1227 20:55:12.604869  496630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:55:12.628494  496630 addons.go:239] Setting addon default-storageclass=true in "embed-certs-193865"
	I1227 20:55:12.628534  496630 host.go:66] Checking if "embed-certs-193865" exists ...
	I1227 20:55:12.628743  496630 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:55:12.631497  496630 cli_runner.go:164] Run: docker container inspect embed-certs-193865 --format={{.State.Status}}
	I1227 20:55:12.631630  496630 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:55:12.631645  496630 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:55:12.631692  496630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:12.669497  496630 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:55:12.669518  496630 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:55:12.669578  496630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:12.679209  496630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa Username:docker}
	I1227 20:55:12.704584  496630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa Username:docker}
	I1227 20:55:13.024843  496630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:55:13.024967  496630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 20:55:13.030492  496630 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:55:13.038646  496630 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:55:13.659895  496630 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1227 20:55:13.660865  496630 node_ready.go:35] waiting up to 6m0s for node "embed-certs-193865" to be "Ready" ...
	I1227 20:55:14.062669  496630 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1227 20:55:14.065593  496630 addons.go:530] duration metric: took 1.476912969s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 20:55:14.167257  496630 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-193865" context rescaled to 1 replicas
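	Only storage-provisioner and default-storageclass are enabled in this run, and the coredns Deployment is rescaled to a single replica. A minimal sketch of confirming both from the host, assuming the kubeconfig context carries the profile name as minikube sets by default:
	    # addon status for this profile
	    out/minikube-linux-arm64 -p embed-certs-193865 addons list
	    # coredns should report 1/1 after the rescale noted above
	    kubectl --context embed-certs-193865 -n kube-system get deployment coredns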
	W1227 20:55:15.663627  496630 node_ready.go:57] node "embed-certs-193865" has "Ready":"False" status (will retry)
	W1227 20:55:17.664066  496630 node_ready.go:57] node "embed-certs-193865" has "Ready":"False" status (will retry)
	W1227 20:55:20.165054  496630 node_ready.go:57] node "embed-certs-193865" has "Ready":"False" status (will retry)
	W1227 20:55:22.663613  496630 node_ready.go:57] node "embed-certs-193865" has "Ready":"False" status (will retry)
	W1227 20:55:24.664325  496630 node_ready.go:57] node "embed-certs-193865" has "Ready":"False" status (will retry)
	I1227 20:55:26.163709  496630 node_ready.go:49] node "embed-certs-193865" is "Ready"
	I1227 20:55:26.163739  496630 node_ready.go:38] duration metric: took 12.502837392s for node "embed-certs-193865" to be "Ready" ...
	I1227 20:55:26.163752  496630 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:55:26.163811  496630 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:55:26.175521  496630 api_server.go:72] duration metric: took 13.587221555s to wait for apiserver process to appear ...
	I1227 20:55:26.175549  496630 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:55:26.175569  496630 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:55:26.183509  496630 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 20:55:26.184603  496630 api_server.go:141] control plane version: v1.35.0
	I1227 20:55:26.184629  496630 api_server.go:131] duration metric: took 9.073643ms to wait for apiserver health ...
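	The node-readiness and apiserver health checks the test driver performs here can be reproduced directly with kubectl; a short sketch under the same budgets used above:
	    # block until the node reports Ready (minikube waited ~12.5s in this run)
	    kubectl --context embed-certs-193865 wait --for=condition=Ready node/embed-certs-193865 --timeout=360s
	    # the same /healthz endpoint minikube probed at https://192.168.76.2:8443/healthz
	    kubectl --context embed-certs-193865 get --raw /healthz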
	I1227 20:55:26.184639  496630 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:55:26.188028  496630 system_pods.go:59] 8 kube-system pods found
	I1227 20:55:26.188106  496630 system_pods.go:61] "coredns-7d764666f9-xj2kx" [bb4db36b-a468-42ed-a57d-07d66fd3677f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:55:26.188119  496630 system_pods.go:61] "etcd-embed-certs-193865" [189ff29b-8bc4-4c48-8fd1-32e246482296] Running
	I1227 20:55:26.188126  496630 system_pods.go:61] "kindnet-fqnrt" [6f652890-8212-487a-a479-ac54591d0db0] Running
	I1227 20:55:26.188131  496630 system_pods.go:61] "kube-apiserver-embed-certs-193865" [e71ae5e1-2d4c-413e-990a-9c9b539d62fb] Running
	I1227 20:55:26.188136  496630 system_pods.go:61] "kube-controller-manager-embed-certs-193865" [ad24b721-376b-410f-8d96-c7b80585aa44] Running
	I1227 20:55:26.188143  496630 system_pods.go:61] "kube-proxy-5mf9z" [2c7bfa55-35a8-4519-8282-2bd750cbc449] Running
	I1227 20:55:26.188148  496630 system_pods.go:61] "kube-scheduler-embed-certs-193865" [c0d62251-0ec8-4ac3-a396-5574e3a92155] Running
	I1227 20:55:26.188155  496630 system_pods.go:61] "storage-provisioner" [eaf08c7a-30b7-4c09-a98e-4b9be46b8f8d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:55:26.188165  496630 system_pods.go:74] duration metric: took 3.520362ms to wait for pod list to return data ...
	I1227 20:55:26.188184  496630 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:55:26.190918  496630 default_sa.go:45] found service account: "default"
	I1227 20:55:26.190943  496630 default_sa.go:55] duration metric: took 2.753368ms for default service account to be created ...
	I1227 20:55:26.190953  496630 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:55:26.193733  496630 system_pods.go:86] 8 kube-system pods found
	I1227 20:55:26.193766  496630 system_pods.go:89] "coredns-7d764666f9-xj2kx" [bb4db36b-a468-42ed-a57d-07d66fd3677f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:55:26.193779  496630 system_pods.go:89] "etcd-embed-certs-193865" [189ff29b-8bc4-4c48-8fd1-32e246482296] Running
	I1227 20:55:26.193787  496630 system_pods.go:89] "kindnet-fqnrt" [6f652890-8212-487a-a479-ac54591d0db0] Running
	I1227 20:55:26.193792  496630 system_pods.go:89] "kube-apiserver-embed-certs-193865" [e71ae5e1-2d4c-413e-990a-9c9b539d62fb] Running
	I1227 20:55:26.193801  496630 system_pods.go:89] "kube-controller-manager-embed-certs-193865" [ad24b721-376b-410f-8d96-c7b80585aa44] Running
	I1227 20:55:26.193806  496630 system_pods.go:89] "kube-proxy-5mf9z" [2c7bfa55-35a8-4519-8282-2bd750cbc449] Running
	I1227 20:55:26.193814  496630 system_pods.go:89] "kube-scheduler-embed-certs-193865" [c0d62251-0ec8-4ac3-a396-5574e3a92155] Running
	I1227 20:55:26.193821  496630 system_pods.go:89] "storage-provisioner" [eaf08c7a-30b7-4c09-a98e-4b9be46b8f8d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:55:26.193850  496630 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1227 20:55:26.491318  496630 system_pods.go:86] 8 kube-system pods found
	I1227 20:55:26.491356  496630 system_pods.go:89] "coredns-7d764666f9-xj2kx" [bb4db36b-a468-42ed-a57d-07d66fd3677f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:55:26.491364  496630 system_pods.go:89] "etcd-embed-certs-193865" [189ff29b-8bc4-4c48-8fd1-32e246482296] Running
	I1227 20:55:26.491371  496630 system_pods.go:89] "kindnet-fqnrt" [6f652890-8212-487a-a479-ac54591d0db0] Running
	I1227 20:55:26.491376  496630 system_pods.go:89] "kube-apiserver-embed-certs-193865" [e71ae5e1-2d4c-413e-990a-9c9b539d62fb] Running
	I1227 20:55:26.491381  496630 system_pods.go:89] "kube-controller-manager-embed-certs-193865" [ad24b721-376b-410f-8d96-c7b80585aa44] Running
	I1227 20:55:26.491386  496630 system_pods.go:89] "kube-proxy-5mf9z" [2c7bfa55-35a8-4519-8282-2bd750cbc449] Running
	I1227 20:55:26.491391  496630 system_pods.go:89] "kube-scheduler-embed-certs-193865" [c0d62251-0ec8-4ac3-a396-5574e3a92155] Running
	I1227 20:55:26.491402  496630 system_pods.go:89] "storage-provisioner" [eaf08c7a-30b7-4c09-a98e-4b9be46b8f8d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:55:26.817291  496630 system_pods.go:86] 8 kube-system pods found
	I1227 20:55:26.817328  496630 system_pods.go:89] "coredns-7d764666f9-xj2kx" [bb4db36b-a468-42ed-a57d-07d66fd3677f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:55:26.817336  496630 system_pods.go:89] "etcd-embed-certs-193865" [189ff29b-8bc4-4c48-8fd1-32e246482296] Running
	I1227 20:55:26.817342  496630 system_pods.go:89] "kindnet-fqnrt" [6f652890-8212-487a-a479-ac54591d0db0] Running
	I1227 20:55:26.817347  496630 system_pods.go:89] "kube-apiserver-embed-certs-193865" [e71ae5e1-2d4c-413e-990a-9c9b539d62fb] Running
	I1227 20:55:26.817352  496630 system_pods.go:89] "kube-controller-manager-embed-certs-193865" [ad24b721-376b-410f-8d96-c7b80585aa44] Running
	I1227 20:55:26.817356  496630 system_pods.go:89] "kube-proxy-5mf9z" [2c7bfa55-35a8-4519-8282-2bd750cbc449] Running
	I1227 20:55:26.817361  496630 system_pods.go:89] "kube-scheduler-embed-certs-193865" [c0d62251-0ec8-4ac3-a396-5574e3a92155] Running
	I1227 20:55:26.817367  496630 system_pods.go:89] "storage-provisioner" [eaf08c7a-30b7-4c09-a98e-4b9be46b8f8d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:55:27.293000  496630 system_pods.go:86] 8 kube-system pods found
	I1227 20:55:27.293033  496630 system_pods.go:89] "coredns-7d764666f9-xj2kx" [bb4db36b-a468-42ed-a57d-07d66fd3677f] Running
	I1227 20:55:27.293042  496630 system_pods.go:89] "etcd-embed-certs-193865" [189ff29b-8bc4-4c48-8fd1-32e246482296] Running
	I1227 20:55:27.293047  496630 system_pods.go:89] "kindnet-fqnrt" [6f652890-8212-487a-a479-ac54591d0db0] Running
	I1227 20:55:27.293059  496630 system_pods.go:89] "kube-apiserver-embed-certs-193865" [e71ae5e1-2d4c-413e-990a-9c9b539d62fb] Running
	I1227 20:55:27.293065  496630 system_pods.go:89] "kube-controller-manager-embed-certs-193865" [ad24b721-376b-410f-8d96-c7b80585aa44] Running
	I1227 20:55:27.293070  496630 system_pods.go:89] "kube-proxy-5mf9z" [2c7bfa55-35a8-4519-8282-2bd750cbc449] Running
	I1227 20:55:27.293075  496630 system_pods.go:89] "kube-scheduler-embed-certs-193865" [c0d62251-0ec8-4ac3-a396-5574e3a92155] Running
	I1227 20:55:27.293083  496630 system_pods.go:89] "storage-provisioner" [eaf08c7a-30b7-4c09-a98e-4b9be46b8f8d] Running
	I1227 20:55:27.293092  496630 system_pods.go:126] duration metric: took 1.102132967s to wait for k8s-apps to be running ...
	I1227 20:55:27.293100  496630 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:55:27.293159  496630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:55:27.310984  496630 system_svc.go:56] duration metric: took 17.873627ms WaitForService to wait for kubelet
	I1227 20:55:27.311025  496630 kubeadm.go:587] duration metric: took 14.722730026s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:55:27.311044  496630 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:55:27.313939  496630 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:55:27.313974  496630 node_conditions.go:123] node cpu capacity is 2
	I1227 20:55:27.313986  496630 node_conditions.go:105] duration metric: took 2.936722ms to run NodePressure ...
	I1227 20:55:27.313999  496630 start.go:242] waiting for startup goroutines ...
	I1227 20:55:27.314016  496630 start.go:247] waiting for cluster config update ...
	I1227 20:55:27.314030  496630 start.go:256] writing updated cluster config ...
	I1227 20:55:27.314323  496630 ssh_runner.go:195] Run: rm -f paused
	I1227 20:55:27.317908  496630 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:55:27.321479  496630 pod_ready.go:83] waiting for pod "coredns-7d764666f9-xj2kx" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:55:27.326112  496630 pod_ready.go:94] pod "coredns-7d764666f9-xj2kx" is "Ready"
	I1227 20:55:27.326139  496630 pod_ready.go:86] duration metric: took 4.63343ms for pod "coredns-7d764666f9-xj2kx" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:55:27.328440  496630 pod_ready.go:83] waiting for pod "etcd-embed-certs-193865" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:55:27.332938  496630 pod_ready.go:94] pod "etcd-embed-certs-193865" is "Ready"
	I1227 20:55:27.332967  496630 pod_ready.go:86] duration metric: took 4.506738ms for pod "etcd-embed-certs-193865" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:55:27.335476  496630 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-193865" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:55:27.339969  496630 pod_ready.go:94] pod "kube-apiserver-embed-certs-193865" is "Ready"
	I1227 20:55:27.339995  496630 pod_ready.go:86] duration metric: took 4.491624ms for pod "kube-apiserver-embed-certs-193865" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:55:27.342274  496630 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-193865" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:55:27.722353  496630 pod_ready.go:94] pod "kube-controller-manager-embed-certs-193865" is "Ready"
	I1227 20:55:27.722426  496630 pod_ready.go:86] duration metric: took 380.116721ms for pod "kube-controller-manager-embed-certs-193865" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:55:27.922628  496630 pod_ready.go:83] waiting for pod "kube-proxy-5mf9z" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:55:28.321532  496630 pod_ready.go:94] pod "kube-proxy-5mf9z" is "Ready"
	I1227 20:55:28.321561  496630 pod_ready.go:86] duration metric: took 398.905877ms for pod "kube-proxy-5mf9z" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:55:28.523388  496630 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-193865" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:55:28.922811  496630 pod_ready.go:94] pod "kube-scheduler-embed-certs-193865" is "Ready"
	I1227 20:55:28.922846  496630 pod_ready.go:86] duration metric: took 399.432123ms for pod "kube-scheduler-embed-certs-193865" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:55:28.922859  496630 pod_ready.go:40] duration metric: took 1.60491717s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
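	The extra wait above iterates over pods carrying the listed control-plane component labels. A hedged per-label equivalent with kubectl wait:
	    # the kube-dns label used for the coredns pod above
	    kubectl --context embed-certs-193865 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s
	    # the same form works for component=etcd, component=kube-apiserver, component=kube-scheduler, ...
	    kubectl --context embed-certs-193865 -n kube-system wait --for=condition=Ready pod -l component=kube-scheduler --timeout=240s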
	I1227 20:55:28.981586  496630 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 20:55:28.984779  496630 out.go:203] 
	W1227 20:55:28.988262  496630 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 20:55:28.991202  496630 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 20:55:28.995259  496630 out.go:179] * Done! kubectl is now configured to use "embed-certs-193865" cluster and "default" namespace by default
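	The warning above is only about client/server minor-version skew (kubectl 1.33 against Kubernetes 1.35). Following the printed hint, the version-matched kubectl that minikube bundles avoids it:
	    # run the bundled, version-matched kubectl against this profile
	    out/minikube-linux-arm64 -p embed-certs-193865 kubectl -- get pods -A
	    # or inspect the skew explicitly with the host kubectl
	    kubectl --context embed-certs-193865 version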
	
	
	==> CRI-O <==
	Dec 27 20:55:26 embed-certs-193865 crio[837]: time="2025-12-27T20:55:26.45462272Z" level=info msg="Created container ac0ca654b774295ae8111c7d1aedfeda7c81f16877b9a14d74c9c2bb7ecec5fd: kube-system/coredns-7d764666f9-xj2kx/coredns" id=8091ebe2-343a-43f1-af4d-b435dd35e72c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:55:26 embed-certs-193865 crio[837]: time="2025-12-27T20:55:26.455506224Z" level=info msg="Starting container: ac0ca654b774295ae8111c7d1aedfeda7c81f16877b9a14d74c9c2bb7ecec5fd" id=bf69f46b-93f4-4b6d-9c87-a51fd2c25be0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:55:26 embed-certs-193865 crio[837]: time="2025-12-27T20:55:26.457129875Z" level=info msg="Started container" PID=1751 containerID=ac0ca654b774295ae8111c7d1aedfeda7c81f16877b9a14d74c9c2bb7ecec5fd description=kube-system/coredns-7d764666f9-xj2kx/coredns id=bf69f46b-93f4-4b6d-9c87-a51fd2c25be0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a24539ce3af65441cd76fc71f8a0aa9e0d97786ab843cf2e7d36387bc17fce9b
	Dec 27 20:55:29 embed-certs-193865 crio[837]: time="2025-12-27T20:55:29.507968766Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4f417def-7df1-419d-a90d-a726b3925d22 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:55:29 embed-certs-193865 crio[837]: time="2025-12-27T20:55:29.508054376Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:55:29 embed-certs-193865 crio[837]: time="2025-12-27T20:55:29.517337424Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:295eef260a88c5eeda1474e4ff01a4ee4a8c1d9754c56c8e85a8f8a32dd2fc45 UID:5e4582d7-6a89-4582-a1c2-98e78bb9f0d2 NetNS:/var/run/netns/ddd11bf0-a301-46b9-b153-a276e1c70694 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012de38}] Aliases:map[]}"
	Dec 27 20:55:29 embed-certs-193865 crio[837]: time="2025-12-27T20:55:29.51737483Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 27 20:55:29 embed-certs-193865 crio[837]: time="2025-12-27T20:55:29.526141732Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:295eef260a88c5eeda1474e4ff01a4ee4a8c1d9754c56c8e85a8f8a32dd2fc45 UID:5e4582d7-6a89-4582-a1c2-98e78bb9f0d2 NetNS:/var/run/netns/ddd11bf0-a301-46b9-b153-a276e1c70694 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012de38}] Aliases:map[]}"
	Dec 27 20:55:29 embed-certs-193865 crio[837]: time="2025-12-27T20:55:29.52628339Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 27 20:55:29 embed-certs-193865 crio[837]: time="2025-12-27T20:55:29.529345606Z" level=info msg="Ran pod sandbox 295eef260a88c5eeda1474e4ff01a4ee4a8c1d9754c56c8e85a8f8a32dd2fc45 with infra container: default/busybox/POD" id=4f417def-7df1-419d-a90d-a726b3925d22 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:55:29 embed-certs-193865 crio[837]: time="2025-12-27T20:55:29.532040965Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3f1b6fb4-b84e-4a4d-bf50-79f366b50bd2 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:55:29 embed-certs-193865 crio[837]: time="2025-12-27T20:55:29.532335554Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=3f1b6fb4-b84e-4a4d-bf50-79f366b50bd2 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:55:29 embed-certs-193865 crio[837]: time="2025-12-27T20:55:29.532468793Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=3f1b6fb4-b84e-4a4d-bf50-79f366b50bd2 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:55:29 embed-certs-193865 crio[837]: time="2025-12-27T20:55:29.535135107Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2d7e9680-3809-4c6a-a9f3-a3add4bba686 name=/runtime.v1.ImageService/PullImage
	Dec 27 20:55:29 embed-certs-193865 crio[837]: time="2025-12-27T20:55:29.538096107Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 27 20:55:31 embed-certs-193865 crio[837]: time="2025-12-27T20:55:31.438120751Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=2d7e9680-3809-4c6a-a9f3-a3add4bba686 name=/runtime.v1.ImageService/PullImage
	Dec 27 20:55:31 embed-certs-193865 crio[837]: time="2025-12-27T20:55:31.438966389Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7e06b22f-6ab7-49d7-a262-c0bd0ed2d15e name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:55:31 embed-certs-193865 crio[837]: time="2025-12-27T20:55:31.442129001Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=65fdc707-0fb5-459e-b590-254576adbb5d name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:55:31 embed-certs-193865 crio[837]: time="2025-12-27T20:55:31.447692752Z" level=info msg="Creating container: default/busybox/busybox" id=7d1fc716-8391-42bf-b796-24d8aeb96a21 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:55:31 embed-certs-193865 crio[837]: time="2025-12-27T20:55:31.447824038Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:55:31 embed-certs-193865 crio[837]: time="2025-12-27T20:55:31.452640101Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:55:31 embed-certs-193865 crio[837]: time="2025-12-27T20:55:31.45320994Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:55:31 embed-certs-193865 crio[837]: time="2025-12-27T20:55:31.46770744Z" level=info msg="Created container bbdf2accf9d1c33d2b3c9ff631431fc3a859940801d832b7cd2185e1b93e301d: default/busybox/busybox" id=7d1fc716-8391-42bf-b796-24d8aeb96a21 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:55:31 embed-certs-193865 crio[837]: time="2025-12-27T20:55:31.470158473Z" level=info msg="Starting container: bbdf2accf9d1c33d2b3c9ff631431fc3a859940801d832b7cd2185e1b93e301d" id=9361d93b-1bd2-4109-b480-8f43a6416069 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:55:31 embed-certs-193865 crio[837]: time="2025-12-27T20:55:31.47219026Z" level=info msg="Started container" PID=1805 containerID=bbdf2accf9d1c33d2b3c9ff631431fc3a859940801d832b7cd2185e1b93e301d description=default/busybox/busybox id=9361d93b-1bd2-4109-b480-8f43a6416069 name=/runtime.v1.RuntimeService/StartContainer sandboxID=295eef260a88c5eeda1474e4ff01a4ee4a8c1d9754c56c8e85a8f8a32dd2fc45
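	The CRI-O entries above can be cross-checked on the node with crictl, which talks to the same runtime socket. A minimal sketch; the container ID prefixes are the ones CRI-O logged, and any unique prefix works:
	    # get a shell on the node first: out/minikube-linux-arm64 -p embed-certs-193865 ssh
	    sudo crictl ps                              # running containers, matching the table below
	    sudo crictl logs bbdf2accf9d1c              # the busybox container started at 20:55:31
	    sudo crictl inspect ac0ca654b7742 | head    # the coredns container created at 20:55:26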
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	bbdf2accf9d1c       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   295eef260a88c       busybox                                      default
	ac0ca654b7742       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                      12 seconds ago      Running             coredns                   0                   a24539ce3af65       coredns-7d764666f9-xj2kx                     kube-system
	ec405c41fc3c7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago      Running             storage-provisioner       0                   bcde51bcecbcd       storage-provisioner                          kube-system
	0bf16afaf46b0       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    23 seconds ago      Running             kindnet-cni               0                   7c67045862b8d       kindnet-fqnrt                                kube-system
	89c053a48e5a4       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                      25 seconds ago      Running             kube-proxy                0                   82f1dedb809f2       kube-proxy-5mf9z                             kube-system
	8b253e30e7fd9       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                      36 seconds ago      Running             kube-controller-manager   0                   572e2914ac15a       kube-controller-manager-embed-certs-193865   kube-system
	d71748b26d6a2       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                      36 seconds ago      Running             kube-scheduler            0                   c3ca9c479fdea       kube-scheduler-embed-certs-193865            kube-system
	822edcfa7883f       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                      36 seconds ago      Running             kube-apiserver            0                   dfcfe1d85ec4b       kube-apiserver-embed-certs-193865            kube-system
	f947abbad9da0       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                      36 seconds ago      Running             etcd                      0                   f0a26659793c5       etcd-embed-certs-193865                      kube-system
	
	
	==> coredns [ac0ca654b774295ae8111c7d1aedfeda7c81f16877b9a14d74c9c2bb7ecec5fd] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:59257 - 46858 "HINFO IN 6950969085112962230.7540785734866813276. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012448392s
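	CoreDNS came up cleanly; the only query logged is its own HINFO self-check returning NXDOMAIN. A quick in-cluster resolution check is possible through the busybox pod this test deploys in the default namespace (busybox 1.28 ships a workable nslookup):
	    # resolve the API service through CoreDNS from inside the cluster
	    kubectl --context embed-certs-193865 exec busybox -- nslookup kubernetes.default.svc.cluster.local
	    # the kube-dns Service and endpoints behind cluster IP 10.96.0.10
	    kubectl --context embed-certs-193865 -n kube-system get svc,endpoints kube-dns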
	
	
	==> describe nodes <==
	Name:               embed-certs-193865
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-193865
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=embed-certs-193865
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_55_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:55:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-193865
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:55:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:55:26 +0000   Sat, 27 Dec 2025 20:55:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:55:26 +0000   Sat, 27 Dec 2025 20:55:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:55:26 +0000   Sat, 27 Dec 2025 20:55:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:55:26 +0000   Sat, 27 Dec 2025 20:55:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-193865
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                5669f867-44f3-47ed-a81f-7695205dabf5
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-7d764666f9-xj2kx                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     25s
	  kube-system                 etcd-embed-certs-193865                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         30s
	  kube-system                 kindnet-fqnrt                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-embed-certs-193865             250m (12%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-embed-certs-193865    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-5mf9z                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-embed-certs-193865             100m (5%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node embed-certs-193865 event: Registered Node embed-certs-193865 in Controller
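	The describe output above was collected with kubectl; the same allocatable figures and node conditions that the node_ready/node_conditions checks earlier in the log rely on can be pulled in machine-readable form, which is a sketch of how such checks are typically scripted:
	    # allocatable resources (cpu 2, memory 8022296Ki on this runner)
	    kubectl --context embed-certs-193865 get node embed-certs-193865 -o jsonpath='{.status.allocatable}'
	    # condition summary (MemoryPressure, DiskPressure, PIDPressure, Ready)
	    kubectl --context embed-certs-193865 get node embed-certs-193865 \
	      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'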
	
	
	==> dmesg <==
	[Dec27 20:22] overlayfs: idmapped layers are currently not supported
	[Dec27 20:23] overlayfs: idmapped layers are currently not supported
	[Dec27 20:24] overlayfs: idmapped layers are currently not supported
	[Dec27 20:25] overlayfs: idmapped layers are currently not supported
	[ +35.447549] overlayfs: idmapped layers are currently not supported
	[Dec27 20:26] overlayfs: idmapped layers are currently not supported
	[Dec27 20:27] overlayfs: idmapped layers are currently not supported
	[  +6.770645] overlayfs: idmapped layers are currently not supported
	[Dec27 20:28] overlayfs: idmapped layers are currently not supported
	[ +25.872751] overlayfs: idmapped layers are currently not supported
	[Dec27 20:29] overlayfs: idmapped layers are currently not supported
	[ +32.997137] overlayfs: idmapped layers are currently not supported
	[Dec27 20:31] overlayfs: idmapped layers are currently not supported
	[Dec27 20:33] overlayfs: idmapped layers are currently not supported
	[ +33.772475] overlayfs: idmapped layers are currently not supported
	[Dec27 20:39] overlayfs: idmapped layers are currently not supported
	[Dec27 20:40] overlayfs: idmapped layers are currently not supported
	[Dec27 20:44] overlayfs: idmapped layers are currently not supported
	[Dec27 20:45] overlayfs: idmapped layers are currently not supported
	[Dec27 20:49] overlayfs: idmapped layers are currently not supported
	[Dec27 20:50] overlayfs: idmapped layers are currently not supported
	[Dec27 20:51] overlayfs: idmapped layers are currently not supported
	[Dec27 20:52] overlayfs: idmapped layers are currently not supported
	[Dec27 20:53] overlayfs: idmapped layers are currently not supported
	[Dec27 20:55] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f947abbad9da06f62ee0ee0f7db97a079759bc20bd9b4f39a7cb489715cab75a] <==
	{"level":"info","ts":"2025-12-27T20:55:02.305570Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T20:55:02.365539Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T20:55:02.365647Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T20:55:02.365728Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-12-27T20:55:02.365802Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:55:02.365850Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:55:02.373517Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T20:55:02.373626Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:55:02.373652Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-27T20:55:02.373662Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T20:55:02.382627Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-193865 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:55:02.382888Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:55:02.385507Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:55:02.385758Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:55:02.386674Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:55:02.391560Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T20:55:02.387398Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:55:02.387741Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:55:02.389526Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:55:02.433598Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:55:02.437505Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:55:02.406371Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:55:02.406849Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:55:02.438677Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T20:55:02.438984Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	
	
	==> kernel <==
	 20:55:38 up  2:38,  0 user,  load average: 0.62, 1.27, 1.66
	Linux embed-certs-193865 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0bf16afaf46b07d6b91a7858a4d0b6e09a0fe530bfda63c4cf5b0b0b747651c5] <==
	I1227 20:55:15.432962       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:55:15.433175       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 20:55:15.433302       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:55:15.433319       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:55:15.433331       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:55:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:55:15.634190       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:55:15.634254       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:55:15.636694       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:55:15.636867       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 20:55:15.836824       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:55:15.836909       1 metrics.go:72] Registering metrics
	I1227 20:55:15.836992       1 controller.go:711] "Syncing nftables rules"
	I1227 20:55:25.634179       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:55:25.634257       1 main.go:301] handling current node
	I1227 20:55:35.634158       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:55:35.634222       1 main.go:301] handling current node
	
	
	==> kube-apiserver [822edcfa7883f468c64deddab9bc398a4183ec5258359cc5039e57793a1d1ddf] <==
	I1227 20:55:05.220894       1 shared_informer.go:377] "Caches are synced"
	I1227 20:55:05.221091       1 shared_informer.go:377] "Caches are synced"
	I1227 20:55:05.231655       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 20:55:05.274285       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:55:05.274414       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 20:55:05.299169       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:55:05.315488       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:55:05.948092       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1227 20:55:05.954287       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1227 20:55:05.954308       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:55:06.693711       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:55:06.758448       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:55:06.854206       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 20:55:06.862439       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1227 20:55:06.863583       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:55:06.868632       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:55:07.051880       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:55:07.918291       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:55:07.937088       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 20:55:07.951526       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 20:55:12.691322       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:55:12.700407       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:55:12.832253       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1227 20:55:13.011992       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1227 20:55:37.324197       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:47372: use of closed network connection
	
	
	==> kube-controller-manager [8b253e30e7fd958f985cc09f5aefa12889727bf20aa295af10f70dfb756c4c8f] <==
	I1227 20:55:11.869727       1 shared_informer.go:377] "Caches are synced"
	I1227 20:55:11.869736       1 shared_informer.go:377] "Caches are synced"
	I1227 20:55:11.869743       1 shared_informer.go:377] "Caches are synced"
	I1227 20:55:11.869753       1 shared_informer.go:377] "Caches are synced"
	I1227 20:55:11.869761       1 shared_informer.go:377] "Caches are synced"
	I1227 20:55:11.869767       1 shared_informer.go:377] "Caches are synced"
	I1227 20:55:11.869778       1 shared_informer.go:377] "Caches are synced"
	I1227 20:55:11.869809       1 shared_informer.go:377] "Caches are synced"
	I1227 20:55:11.869821       1 shared_informer.go:377] "Caches are synced"
	I1227 20:55:11.869828       1 shared_informer.go:377] "Caches are synced"
	I1227 20:55:11.869835       1 shared_informer.go:377] "Caches are synced"
	I1227 20:55:11.869843       1 shared_informer.go:377] "Caches are synced"
	I1227 20:55:11.869850       1 shared_informer.go:377] "Caches are synced"
	I1227 20:55:11.880892       1 shared_informer.go:377] "Caches are synced"
	I1227 20:55:11.865498       1 shared_informer.go:377] "Caches are synced"
	I1227 20:55:11.865625       1 shared_informer.go:377] "Caches are synced"
	I1227 20:55:11.865725       1 shared_informer.go:377] "Caches are synced"
	I1227 20:55:11.869661       1 shared_informer.go:377] "Caches are synced"
	I1227 20:55:11.898223       1 range_allocator.go:433] "Set node PodCIDR" node="embed-certs-193865" podCIDRs=["10.244.0.0/24"]
	I1227 20:55:11.899178       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:55:11.967332       1 shared_informer.go:377] "Caches are synced"
	I1227 20:55:11.967424       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:55:11.967453       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:55:12.000016       1 shared_informer.go:377] "Caches are synced"
	I1227 20:55:26.863179       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [89c053a48e5a4e16b0afde6852b21b88795da942edb33e6a2fce2dd2d3fbf523] <==
	I1227 20:55:13.459836       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:55:13.567357       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:55:13.668303       1 shared_informer.go:377] "Caches are synced"
	I1227 20:55:13.673786       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 20:55:13.673935       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:55:13.756270       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:55:13.756321       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:55:13.794033       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:55:13.794340       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:55:13.794364       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:55:13.795725       1 config.go:200] "Starting service config controller"
	I1227 20:55:13.795749       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:55:13.795766       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:55:13.801819       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:55:13.801889       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:55:13.801896       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:55:13.802549       1 config.go:309] "Starting node config controller"
	I1227 20:55:13.802557       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:55:13.802564       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:55:13.897542       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:55:13.902776       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 20:55:13.902818       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d71748b26d6a2db1e35bfb7205bf121b4f8cb5e84840d4d0e2b170f9c8a19b3c] <==
	E1227 20:55:05.234348       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 20:55:05.234456       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 20:55:05.234569       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 20:55:05.234660       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 20:55:05.234743       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 20:55:05.234826       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 20:55:05.234918       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 20:55:05.235107       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 20:55:05.235208       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 20:55:05.235334       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 20:55:05.235414       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 20:55:05.235512       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 20:55:05.235582       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 20:55:05.235641       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 20:55:05.246606       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 20:55:06.053485       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 20:55:06.189708       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 20:55:06.247763       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 20:55:06.308095       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 20:55:06.310983       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 20:55:06.341274       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 20:55:06.390703       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 20:55:06.439053       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 20:55:06.458768       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	I1227 20:55:06.852466       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:55:13 embed-certs-193865 kubelet[1286]: I1227 20:55:13.033339    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c7bfa55-35a8-4519-8282-2bd750cbc449-lib-modules\") pod \"kube-proxy-5mf9z\" (UID: \"2c7bfa55-35a8-4519-8282-2bd750cbc449\") " pod="kube-system/kube-proxy-5mf9z"
	Dec 27 20:55:13 embed-certs-193865 kubelet[1286]: I1227 20:55:13.033357    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f652890-8212-487a-a479-ac54591d0db0-lib-modules\") pod \"kindnet-fqnrt\" (UID: \"6f652890-8212-487a-a479-ac54591d0db0\") " pod="kube-system/kindnet-fqnrt"
	Dec 27 20:55:13 embed-certs-193865 kubelet[1286]: I1227 20:55:13.033381    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mhw8\" (UniqueName: \"kubernetes.io/projected/2c7bfa55-35a8-4519-8282-2bd750cbc449-kube-api-access-7mhw8\") pod \"kube-proxy-5mf9z\" (UID: \"2c7bfa55-35a8-4519-8282-2bd750cbc449\") " pod="kube-system/kube-proxy-5mf9z"
	Dec 27 20:55:13 embed-certs-193865 kubelet[1286]: I1227 20:55:13.151758    1286 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 27 20:55:13 embed-certs-193865 kubelet[1286]: W1227 20:55:13.275517    1286 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/910081dd96e2a5637f3b408a8057a7254f3b80b49d653ffba57b3de358a32ed9/crio-7c67045862b8d8da17df97d3da74ee7e2c5fcca78ecb80ff6c041746c23d7540 WatchSource:0}: Error finding container 7c67045862b8d8da17df97d3da74ee7e2c5fcca78ecb80ff6c041746c23d7540: Status 404 returned error can't find the container with id 7c67045862b8d8da17df97d3da74ee7e2c5fcca78ecb80ff6c041746c23d7540
	Dec 27 20:55:14 embed-certs-193865 kubelet[1286]: E1227 20:55:14.550039    1286 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-193865" containerName="kube-scheduler"
	Dec 27 20:55:14 embed-certs-193865 kubelet[1286]: I1227 20:55:14.567375    1286 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-5mf9z" podStartSLOduration=2.567355409 podStartE2EDuration="2.567355409s" podCreationTimestamp="2025-12-27 20:55:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:55:13.98623573 +0000 UTC m=+6.232094533" watchObservedRunningTime="2025-12-27 20:55:14.567355409 +0000 UTC m=+6.813214228"
	Dec 27 20:55:15 embed-certs-193865 kubelet[1286]: E1227 20:55:15.452080    1286 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-193865" containerName="kube-apiserver"
	Dec 27 20:55:19 embed-certs-193865 kubelet[1286]: E1227 20:55:19.605663    1286 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-193865" containerName="kube-controller-manager"
	Dec 27 20:55:19 embed-certs-193865 kubelet[1286]: I1227 20:55:19.619107    1286 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-fqnrt" podStartSLOduration=5.606955159 podStartE2EDuration="7.619091879s" podCreationTimestamp="2025-12-27 20:55:12 +0000 UTC" firstStartedPulling="2025-12-27 20:55:13.286906423 +0000 UTC m=+5.532765226" lastFinishedPulling="2025-12-27 20:55:15.299043144 +0000 UTC m=+7.544901946" observedRunningTime="2025-12-27 20:55:15.975560753 +0000 UTC m=+8.221419556" watchObservedRunningTime="2025-12-27 20:55:19.619091879 +0000 UTC m=+11.864950690"
	Dec 27 20:55:20 embed-certs-193865 kubelet[1286]: E1227 20:55:20.771639    1286 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-193865" containerName="etcd"
	Dec 27 20:55:24 embed-certs-193865 kubelet[1286]: E1227 20:55:24.558320    1286 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-193865" containerName="kube-scheduler"
	Dec 27 20:55:25 embed-certs-193865 kubelet[1286]: E1227 20:55:25.462333    1286 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-193865" containerName="kube-apiserver"
	Dec 27 20:55:26 embed-certs-193865 kubelet[1286]: I1227 20:55:26.002335    1286 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 27 20:55:26 embed-certs-193865 kubelet[1286]: I1227 20:55:26.127464    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffbmh\" (UniqueName: \"kubernetes.io/projected/eaf08c7a-30b7-4c09-a98e-4b9be46b8f8d-kube-api-access-ffbmh\") pod \"storage-provisioner\" (UID: \"eaf08c7a-30b7-4c09-a98e-4b9be46b8f8d\") " pod="kube-system/storage-provisioner"
	Dec 27 20:55:26 embed-certs-193865 kubelet[1286]: I1227 20:55:26.127522    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb4db36b-a468-42ed-a57d-07d66fd3677f-config-volume\") pod \"coredns-7d764666f9-xj2kx\" (UID: \"bb4db36b-a468-42ed-a57d-07d66fd3677f\") " pod="kube-system/coredns-7d764666f9-xj2kx"
	Dec 27 20:55:26 embed-certs-193865 kubelet[1286]: I1227 20:55:26.127550    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhhc7\" (UniqueName: \"kubernetes.io/projected/bb4db36b-a468-42ed-a57d-07d66fd3677f-kube-api-access-fhhc7\") pod \"coredns-7d764666f9-xj2kx\" (UID: \"bb4db36b-a468-42ed-a57d-07d66fd3677f\") " pod="kube-system/coredns-7d764666f9-xj2kx"
	Dec 27 20:55:26 embed-certs-193865 kubelet[1286]: I1227 20:55:26.127574    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/eaf08c7a-30b7-4c09-a98e-4b9be46b8f8d-tmp\") pod \"storage-provisioner\" (UID: \"eaf08c7a-30b7-4c09-a98e-4b9be46b8f8d\") " pod="kube-system/storage-provisioner"
	Dec 27 20:55:26 embed-certs-193865 kubelet[1286]: W1227 20:55:26.391897    1286 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/910081dd96e2a5637f3b408a8057a7254f3b80b49d653ffba57b3de358a32ed9/crio-a24539ce3af65441cd76fc71f8a0aa9e0d97786ab843cf2e7d36387bc17fce9b WatchSource:0}: Error finding container a24539ce3af65441cd76fc71f8a0aa9e0d97786ab843cf2e7d36387bc17fce9b: Status 404 returned error can't find the container with id a24539ce3af65441cd76fc71f8a0aa9e0d97786ab843cf2e7d36387bc17fce9b
	Dec 27 20:55:26 embed-certs-193865 kubelet[1286]: E1227 20:55:26.984181    1286 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-xj2kx" containerName="coredns"
	Dec 27 20:55:27 embed-certs-193865 kubelet[1286]: I1227 20:55:27.015744    1286 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.015727036 podStartE2EDuration="14.015727036s" podCreationTimestamp="2025-12-27 20:55:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:55:26.999194385 +0000 UTC m=+19.245053188" watchObservedRunningTime="2025-12-27 20:55:27.015727036 +0000 UTC m=+19.261585839"
	Dec 27 20:55:27 embed-certs-193865 kubelet[1286]: E1227 20:55:27.986814    1286 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-xj2kx" containerName="coredns"
	Dec 27 20:55:28 embed-certs-193865 kubelet[1286]: E1227 20:55:28.988568    1286 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-xj2kx" containerName="coredns"
	Dec 27 20:55:29 embed-certs-193865 kubelet[1286]: I1227 20:55:29.198214    1286 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-xj2kx" podStartSLOduration=16.198191646 podStartE2EDuration="16.198191646s" podCreationTimestamp="2025-12-27 20:55:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:55:27.017379134 +0000 UTC m=+19.263237936" watchObservedRunningTime="2025-12-27 20:55:29.198191646 +0000 UTC m=+21.444050449"
	Dec 27 20:55:29 embed-certs-193865 kubelet[1286]: I1227 20:55:29.248394    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l45st\" (UniqueName: \"kubernetes.io/projected/5e4582d7-6a89-4582-a1c2-98e78bb9f0d2-kube-api-access-l45st\") pod \"busybox\" (UID: \"5e4582d7-6a89-4582-a1c2-98e78bb9f0d2\") " pod="default/busybox"
	
	
	==> storage-provisioner [ec405c41fc3c79ff1dde609e83c9f5ffcbe3149afb2f761cfb59f8a0fcbb7e84] <==
	I1227 20:55:26.426630       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 20:55:26.443282       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 20:55:26.444131       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 20:55:26.447399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:55:26.461320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:55:26.461597       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 20:55:26.462223       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"973a1ef1-d110-4815-b972-77baf58b2ed2", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-193865_f1064cf2-daa1-455c-9459-b5c2a00c4a52 became leader
	I1227 20:55:26.463758       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-193865_f1064cf2-daa1-455c-9459-b5c2a00c4a52!
	W1227 20:55:26.467710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:55:26.485148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:55:26.573961       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-193865_f1064cf2-daa1-455c-9459-b5c2a00c4a52!
	W1227 20:55:28.488499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:55:28.493435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:55:30.496778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:55:30.501175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:55:32.504127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:55:32.508418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:55:34.511585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:55:34.516436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:55:36.519840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:55:36.524213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:55:38.527746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:55:38.534487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
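Note on the repeated "v1 Endpoints is deprecated in v1.33+" warnings in the storage-provisioner log above: they appear to come from the provisioner's leader-election lease (kube-system/k8s.io-minikube-hostpath), which is still backed by a v1 Endpoints object. A minimal inspection sketch, assuming the same kubectl context used elsewhere in this post-mortem:

	# the Endpoints object the provisioner's leader election still writes to
	kubectl --context embed-certs-193865 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# the EndpointSlice API the warning recommends migrating to
	kubectl --context embed-certs-193865 -n kube-system get endpointslices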
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-193865 -n embed-certs-193865
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-193865 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.32s)
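A minimal sketch for re-running the addon step by hand against this profile. It assumes the same metrics-server enable command recorded in the Audit table further below for the old-k8s-version-855707 profile; the exact addon and flags this subtest passes are defined in start_stop_delete_test.go and may differ:

	# re-run the addon enable and capture the exit status the test checks
	out/minikube-linux-arm64 -p embed-certs-193865 addons enable metrics-server --alsologtostderr -v=1
	echo "addons enable exit status: $?"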

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (6.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-193865 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-193865 --alsologtostderr -v=1: exit status 80 (2.389884475s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-193865 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:56:54.201561  503138 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:56:54.201802  503138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:56:54.201833  503138 out.go:374] Setting ErrFile to fd 2...
	I1227 20:56:54.201854  503138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:56:54.202201  503138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:56:54.202582  503138 out.go:368] Setting JSON to false
	I1227 20:56:54.202636  503138 mustload.go:66] Loading cluster: embed-certs-193865
	I1227 20:56:54.203058  503138 config.go:182] Loaded profile config "embed-certs-193865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:56:54.203583  503138 cli_runner.go:164] Run: docker container inspect embed-certs-193865 --format={{.State.Status}}
	I1227 20:56:54.220772  503138 host.go:66] Checking if "embed-certs-193865" exists ...
	I1227 20:56:54.221076  503138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:56:54.277368  503138 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-27 20:56:54.267874917 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:56:54.278143  503138 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22332/minikube-v1.37.0-1766811082-22332-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766811082-22332/minikube-v1.37.0-1766811082-22332-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766811082-22332-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:embed-certs-193865 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(boo
l=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 20:56:54.281907  503138 out.go:179] * Pausing node embed-certs-193865 ... 
	I1227 20:56:54.284873  503138 host.go:66] Checking if "embed-certs-193865" exists ...
	I1227 20:56:54.285207  503138 ssh_runner.go:195] Run: systemctl --version
	I1227 20:56:54.285252  503138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:56:54.301603  503138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa Username:docker}
	I1227 20:56:54.399954  503138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:56:54.414171  503138 pause.go:52] kubelet running: true
	I1227 20:56:54.414266  503138 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:56:54.660578  503138 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:56:54.660660  503138 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:56:54.771267  503138 cri.go:96] found id: "be8b934ba2d3ac38f4d68377967b793ddd2b8910b8768fcc498117296522c796"
	I1227 20:56:54.771292  503138 cri.go:96] found id: "4af7ead68be14764fdb90b14930b698a11839ca32ca4aad38127b0a1c26f10ea"
	I1227 20:56:54.771297  503138 cri.go:96] found id: "1727e2655810dd7761bc82b6eccd9673976f5a22e959a68cb5eb1c718414ce6d"
	I1227 20:56:54.771308  503138 cri.go:96] found id: "ca8c3de3fdc21beb2e56b12111a476cd88bb9d76e087d7bc994a71989d012ece"
	I1227 20:56:54.771311  503138 cri.go:96] found id: "da7a6ea56aa7b1cc7394b633af80fcf03ded1031c60eee45e624da67ab4f23e0"
	I1227 20:56:54.771315  503138 cri.go:96] found id: "3dfb4788db04d24ff921ca961d74a35736ceb9dcb271f67d3eef434cef1c7725"
	I1227 20:56:54.771318  503138 cri.go:96] found id: "4e5cabfe80bde33d172c974ffd714e8d551a86c345273a6f54f995aca0fd5be9"
	I1227 20:56:54.771321  503138 cri.go:96] found id: "e6ca226eab1fb21c9058d6260555c9c845dbc797687277be4d78f9bff45c68ae"
	I1227 20:56:54.771323  503138 cri.go:96] found id: "042eda9613782dfa323700aa0d06a99229b8b2dd3a00161d5be2ccee081daeb7"
	I1227 20:56:54.771329  503138 cri.go:96] found id: "8aa96fff450baf4425feed3f8caffc607682e228422def93111eb887ed139977"
	I1227 20:56:54.771359  503138 cri.go:96] found id: "82f9c7926d9e00ef4eee7b452a712b6517c6239daad7d110dbea66322be1a9fe"
	I1227 20:56:54.771371  503138 cri.go:96] found id: ""
	I1227 20:56:54.771422  503138 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:56:54.789606  503138 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:56:54Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:56:55.095105  503138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:56:55.109415  503138 pause.go:52] kubelet running: false
	I1227 20:56:55.109526  503138 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:56:55.284689  503138 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:56:55.284772  503138 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:56:55.361415  503138 cri.go:96] found id: "be8b934ba2d3ac38f4d68377967b793ddd2b8910b8768fcc498117296522c796"
	I1227 20:56:55.361440  503138 cri.go:96] found id: "4af7ead68be14764fdb90b14930b698a11839ca32ca4aad38127b0a1c26f10ea"
	I1227 20:56:55.361514  503138 cri.go:96] found id: "1727e2655810dd7761bc82b6eccd9673976f5a22e959a68cb5eb1c718414ce6d"
	I1227 20:56:55.361524  503138 cri.go:96] found id: "ca8c3de3fdc21beb2e56b12111a476cd88bb9d76e087d7bc994a71989d012ece"
	I1227 20:56:55.361528  503138 cri.go:96] found id: "da7a6ea56aa7b1cc7394b633af80fcf03ded1031c60eee45e624da67ab4f23e0"
	I1227 20:56:55.361536  503138 cri.go:96] found id: "3dfb4788db04d24ff921ca961d74a35736ceb9dcb271f67d3eef434cef1c7725"
	I1227 20:56:55.361539  503138 cri.go:96] found id: "4e5cabfe80bde33d172c974ffd714e8d551a86c345273a6f54f995aca0fd5be9"
	I1227 20:56:55.361542  503138 cri.go:96] found id: "e6ca226eab1fb21c9058d6260555c9c845dbc797687277be4d78f9bff45c68ae"
	I1227 20:56:55.361545  503138 cri.go:96] found id: "042eda9613782dfa323700aa0d06a99229b8b2dd3a00161d5be2ccee081daeb7"
	I1227 20:56:55.361557  503138 cri.go:96] found id: "8aa96fff450baf4425feed3f8caffc607682e228422def93111eb887ed139977"
	I1227 20:56:55.361567  503138 cri.go:96] found id: "82f9c7926d9e00ef4eee7b452a712b6517c6239daad7d110dbea66322be1a9fe"
	I1227 20:56:55.361570  503138 cri.go:96] found id: ""
	I1227 20:56:55.361622  503138 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:56:55.709016  503138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:56:55.721967  503138 pause.go:52] kubelet running: false
	I1227 20:56:55.722032  503138 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:56:55.879134  503138 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:56:55.879237  503138 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:56:55.944080  503138 cri.go:96] found id: "be8b934ba2d3ac38f4d68377967b793ddd2b8910b8768fcc498117296522c796"
	I1227 20:56:55.944104  503138 cri.go:96] found id: "4af7ead68be14764fdb90b14930b698a11839ca32ca4aad38127b0a1c26f10ea"
	I1227 20:56:55.944110  503138 cri.go:96] found id: "1727e2655810dd7761bc82b6eccd9673976f5a22e959a68cb5eb1c718414ce6d"
	I1227 20:56:55.944114  503138 cri.go:96] found id: "ca8c3de3fdc21beb2e56b12111a476cd88bb9d76e087d7bc994a71989d012ece"
	I1227 20:56:55.944117  503138 cri.go:96] found id: "da7a6ea56aa7b1cc7394b633af80fcf03ded1031c60eee45e624da67ab4f23e0"
	I1227 20:56:55.944121  503138 cri.go:96] found id: "3dfb4788db04d24ff921ca961d74a35736ceb9dcb271f67d3eef434cef1c7725"
	I1227 20:56:55.944124  503138 cri.go:96] found id: "4e5cabfe80bde33d172c974ffd714e8d551a86c345273a6f54f995aca0fd5be9"
	I1227 20:56:55.944127  503138 cri.go:96] found id: "e6ca226eab1fb21c9058d6260555c9c845dbc797687277be4d78f9bff45c68ae"
	I1227 20:56:55.944132  503138 cri.go:96] found id: "042eda9613782dfa323700aa0d06a99229b8b2dd3a00161d5be2ccee081daeb7"
	I1227 20:56:55.944144  503138 cri.go:96] found id: "8aa96fff450baf4425feed3f8caffc607682e228422def93111eb887ed139977"
	I1227 20:56:55.944150  503138 cri.go:96] found id: "82f9c7926d9e00ef4eee7b452a712b6517c6239daad7d110dbea66322be1a9fe"
	I1227 20:56:55.944154  503138 cri.go:96] found id: ""
	I1227 20:56:55.944208  503138 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:56:56.266840  503138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:56:56.279612  503138 pause.go:52] kubelet running: false
	I1227 20:56:56.279685  503138 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:56:56.441532  503138 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:56:56.441653  503138 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:56:56.512631  503138 cri.go:96] found id: "be8b934ba2d3ac38f4d68377967b793ddd2b8910b8768fcc498117296522c796"
	I1227 20:56:56.512703  503138 cri.go:96] found id: "4af7ead68be14764fdb90b14930b698a11839ca32ca4aad38127b0a1c26f10ea"
	I1227 20:56:56.512727  503138 cri.go:96] found id: "1727e2655810dd7761bc82b6eccd9673976f5a22e959a68cb5eb1c718414ce6d"
	I1227 20:56:56.512749  503138 cri.go:96] found id: "ca8c3de3fdc21beb2e56b12111a476cd88bb9d76e087d7bc994a71989d012ece"
	I1227 20:56:56.512775  503138 cri.go:96] found id: "da7a6ea56aa7b1cc7394b633af80fcf03ded1031c60eee45e624da67ab4f23e0"
	I1227 20:56:56.512798  503138 cri.go:96] found id: "3dfb4788db04d24ff921ca961d74a35736ceb9dcb271f67d3eef434cef1c7725"
	I1227 20:56:56.512820  503138 cri.go:96] found id: "4e5cabfe80bde33d172c974ffd714e8d551a86c345273a6f54f995aca0fd5be9"
	I1227 20:56:56.512843  503138 cri.go:96] found id: "e6ca226eab1fb21c9058d6260555c9c845dbc797687277be4d78f9bff45c68ae"
	I1227 20:56:56.512872  503138 cri.go:96] found id: "042eda9613782dfa323700aa0d06a99229b8b2dd3a00161d5be2ccee081daeb7"
	I1227 20:56:56.512896  503138 cri.go:96] found id: "8aa96fff450baf4425feed3f8caffc607682e228422def93111eb887ed139977"
	I1227 20:56:56.512941  503138 cri.go:96] found id: "82f9c7926d9e00ef4eee7b452a712b6517c6239daad7d110dbea66322be1a9fe"
	I1227 20:56:56.512967  503138 cri.go:96] found id: ""
	I1227 20:56:56.513039  503138 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:56:56.528648  503138 out.go:203] 
	W1227 20:56:56.531442  503138 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:56:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:56:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 20:56:56.531477  503138 out.go:285] * 
	* 
	W1227 20:56:56.534966  503138 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 20:56:56.538017  503138 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-193865 --alsologtostderr -v=1 failed: exit status 80
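Every retry in the stderr above fails at the same point: "sudo runc list -f json" returns "open /run/runc: no such file or directory", so minikube cannot enumerate running containers before pausing them, even though crictl still lists the kube-system container IDs. A minimal sketch for checking this by hand on the node, assuming the runtime state directory is either /run/runc or /run/crun (which one exists depends on the OCI runtime CRI-O is configured with):

	# see which OCI runtime state directory actually exists inside the node
	out/minikube-linux-arm64 -p embed-certs-193865 ssh -- "ls -d /run/runc /run/crun 2>/dev/null"
	# repeat the exact call the pause path makes
	out/minikube-linux-arm64 -p embed-certs-193865 ssh -- "sudo runc list -f json"
	# CRI-O itself is still answering, as the container IDs in the log above show
	out/minikube-linux-arm64 -p embed-certs-193865 ssh -- "sudo crictl ps --state running --quiet"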
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-193865
helpers_test.go:244: (dbg) docker inspect embed-certs-193865:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "910081dd96e2a5637f3b408a8057a7254f3b80b49d653ffba57b3de358a32ed9",
	        "Created": "2025-12-27T20:54:52.231777017Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 500553,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:55:52.164842249Z",
	            "FinishedAt": "2025-12-27T20:55:51.341514375Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/910081dd96e2a5637f3b408a8057a7254f3b80b49d653ffba57b3de358a32ed9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/910081dd96e2a5637f3b408a8057a7254f3b80b49d653ffba57b3de358a32ed9/hostname",
	        "HostsPath": "/var/lib/docker/containers/910081dd96e2a5637f3b408a8057a7254f3b80b49d653ffba57b3de358a32ed9/hosts",
	        "LogPath": "/var/lib/docker/containers/910081dd96e2a5637f3b408a8057a7254f3b80b49d653ffba57b3de358a32ed9/910081dd96e2a5637f3b408a8057a7254f3b80b49d653ffba57b3de358a32ed9-json.log",
	        "Name": "/embed-certs-193865",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-193865:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-193865",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "910081dd96e2a5637f3b408a8057a7254f3b80b49d653ffba57b3de358a32ed9",
	                "LowerDir": "/var/lib/docker/overlay2/fdd0efe955c9b2f82090fe0b88aba3b05df41490a2ac55c7669ec25ea57da42f-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdd0efe955c9b2f82090fe0b88aba3b05df41490a2ac55c7669ec25ea57da42f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdd0efe955c9b2f82090fe0b88aba3b05df41490a2ac55c7669ec25ea57da42f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdd0efe955c9b2f82090fe0b88aba3b05df41490a2ac55c7669ec25ea57da42f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-193865",
	                "Source": "/var/lib/docker/volumes/embed-certs-193865/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-193865",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-193865",
	                "name.minikube.sigs.k8s.io": "embed-certs-193865",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "29956075df58cec02773902d0bc62bbfbb0ef700cff6861f8c4646851ef90ecf",
	            "SandboxKey": "/var/run/docker/netns/29956075df58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-193865": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:9f:25:e6:87:14",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "58b23b0ff82a7c2d13a32fdf89113eb222c2e15062269f5db64ae246b28bdf6b",
	                    "EndpointID": "8541ce09d99ee2fb7c55f72489f84b19051d2f8214d5e8b6b32346b883bc0fe7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-193865",
	                        "910081dd96e2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
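The inspect output above is the ground truth the harness works from: the container publishes its SSH port 22 on 127.0.0.1:33433 (see NetworkSettings.Ports), and later in this log minikube's cli_runner reads exactly that mapping with a docker inspect Go template. A minimal standalone sketch of the same lookup, using the container name from this run (not part of the test suite):

// portcheck.go - query the host port mapped to the container's SSH port 22,
// using the same inspect template that cli_runner runs later in this log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	// Same Go template as the log's cli_runner call; prints e.g. 33433.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("embed-certs-193865")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("sshd is published on 127.0.0.1:" + port)
}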
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-193865 -n embed-certs-193865
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-193865 -n embed-certs-193865: exit status 2 (327.285577ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
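The status probe above is how the harness decides whether to proceed with a post-mortem: it runs the minikube binary with --format={{.Host}} and tolerates a nonzero exit, since (as here) exit status 2 can still come back with the host reporting Running. A hedged sketch of that probe, with the binary path and profile name taken from this run:

// statuscheck.go - run `minikube status --format={{.Host}}` and keep the output
// even on a nonzero exit, mirroring the helpers_test.go status check above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func hostStatus(binary, profile string) (state string, exitCode int, err error) {
	cmd := exec.Command(binary, "status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, runErr := cmd.Output()
	state = strings.TrimSpace(string(out))
	var ee *exec.ExitError
	if errors.As(runErr, &ee) {
		// Nonzero exit: stdout may still say "Running", so report both.
		return state, ee.ExitCode(), nil
	}
	return state, 0, runErr
}

func main() {
	state, code, err := hostStatus("out/minikube-linux-arm64", "embed-certs-193865")
	if err != nil {
		fmt.Println("could not run status:", err)
		return
	}
	fmt.Printf("host=%q exit=%d (nonzero may be ok)\n", state, code)
}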
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-193865 logs -n 25
E1227 20:56:57.018339  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-193865 logs -n 25: (1.259183861s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-855707 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-855707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:50 UTC │                     │
	│ stop    │ -p old-k8s-version-855707 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:50 UTC │ 27 Dec 25 20:51 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-855707 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:51 UTC │ 27 Dec 25 20:51 UTC │
	│ start   │ -p old-k8s-version-855707 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:51 UTC │ 27 Dec 25 20:52 UTC │
	│ image   │ old-k8s-version-855707 image list --format=json                                                                                                                                                                                               │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ pause   │ -p old-k8s-version-855707 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │                     │
	│ delete  │ -p old-k8s-version-855707                                                                                                                                                                                                                     │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ delete  │ -p old-k8s-version-855707                                                                                                                                                                                                                     │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ start   │ -p default-k8s-diff-port-058924 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:53 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-058924 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-058924 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-058924 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
	│ start   │ -p default-k8s-diff-port-058924 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:54 UTC │
	│ image   │ default-k8s-diff-port-058924 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
	│ pause   │ -p default-k8s-diff-port-058924 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-058924                                                                                                                                                                                                               │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
	│ delete  │ -p default-k8s-diff-port-058924                                                                                                                                                                                                               │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
	│ start   │ -p embed-certs-193865 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:55 UTC │
	│ addons  │ enable metrics-server -p embed-certs-193865 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │                     │
	│ stop    │ -p embed-certs-193865 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │ 27 Dec 25 20:55 UTC │
	│ addons  │ enable dashboard -p embed-certs-193865 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │ 27 Dec 25 20:55 UTC │
	│ start   │ -p embed-certs-193865 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │ 27 Dec 25 20:56 UTC │
	│ image   │ embed-certs-193865 image list --format=json                                                                                                                                                                                                   │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:56 UTC │ 27 Dec 25 20:56 UTC │
	│ pause   │ -p embed-certs-193865 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:55:51
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:55:51.893360  500426 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:55:51.893504  500426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:55:51.893514  500426 out.go:374] Setting ErrFile to fd 2...
	I1227 20:55:51.893520  500426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:55:51.893759  500426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:55:51.894104  500426 out.go:368] Setting JSON to false
	I1227 20:55:51.894929  500426 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9504,"bootTime":1766859448,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:55:51.894998  500426 start.go:143] virtualization:  
	I1227 20:55:51.899957  500426 out.go:179] * [embed-certs-193865] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:55:51.903011  500426 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:55:51.903144  500426 notify.go:221] Checking for updates...
	I1227 20:55:51.908863  500426 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:55:51.911820  500426 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:55:51.915150  500426 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:55:51.918081  500426 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:55:51.920957  500426 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:55:51.924286  500426 config.go:182] Loaded profile config "embed-certs-193865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:55:51.924880  500426 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:55:51.957560  500426 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:55:51.957703  500426 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:55:52.015154  500426 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:55:52.004054917 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:55:52.015272  500426 docker.go:319] overlay module found
	I1227 20:55:52.018404  500426 out.go:179] * Using the docker driver based on existing profile
	I1227 20:55:52.021364  500426 start.go:309] selected driver: docker
	I1227 20:55:52.021390  500426 start.go:928] validating driver "docker" against &{Name:embed-certs-193865 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-193865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:55:52.021650  500426 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:55:52.022425  500426 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:55:52.079562  500426 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:55:52.06921667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:55:52.079951  500426 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:55:52.079986  500426 cni.go:84] Creating CNI manager for ""
	I1227 20:55:52.080044  500426 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:55:52.080086  500426 start.go:353] cluster config:
	{Name:embed-certs-193865 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-193865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:55:52.083436  500426 out.go:179] * Starting "embed-certs-193865" primary control-plane node in "embed-certs-193865" cluster
	I1227 20:55:52.086313  500426 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:55:52.089361  500426 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:55:52.092315  500426 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:55:52.092368  500426 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:55:52.092392  500426 cache.go:65] Caching tarball of preloaded images
	I1227 20:55:52.092410  500426 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:55:52.092483  500426 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:55:52.092495  500426 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:55:52.092615  500426 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/config.json ...
	I1227 20:55:52.113419  500426 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:55:52.113468  500426 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:55:52.113489  500426 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:55:52.113522  500426 start.go:360] acquireMachinesLock for embed-certs-193865: {Name:mkc50e87a609f0ebbab428159240cc886136162f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:55:52.113597  500426 start.go:364] duration metric: took 45.685µs to acquireMachinesLock for "embed-certs-193865"
	I1227 20:55:52.113620  500426 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:55:52.113632  500426 fix.go:54] fixHost starting: 
	I1227 20:55:52.113902  500426 cli_runner.go:164] Run: docker container inspect embed-certs-193865 --format={{.State.Status}}
	I1227 20:55:52.130839  500426 fix.go:112] recreateIfNeeded on embed-certs-193865: state=Stopped err=<nil>
	W1227 20:55:52.130881  500426 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:55:52.134048  500426 out.go:252] * Restarting existing docker container for "embed-certs-193865" ...
	I1227 20:55:52.134138  500426 cli_runner.go:164] Run: docker start embed-certs-193865
	I1227 20:55:52.390288  500426 cli_runner.go:164] Run: docker container inspect embed-certs-193865 --format={{.State.Status}}
	I1227 20:55:52.412991  500426 kic.go:430] container "embed-certs-193865" state is running.
	I1227 20:55:52.413376  500426 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-193865
	I1227 20:55:52.435839  500426 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/config.json ...
	I1227 20:55:52.437000  500426 machine.go:94] provisionDockerMachine start ...
	I1227 20:55:52.437067  500426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:52.457615  500426 main.go:144] libmachine: Using SSH client type: native
	I1227 20:55:52.457942  500426 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1227 20:55:52.457951  500426 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:55:52.458718  500426 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39556->127.0.0.1:33433: read: connection reset by peer
	I1227 20:55:55.604947  500426 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-193865
	
	I1227 20:55:55.604971  500426 ubuntu.go:182] provisioning hostname "embed-certs-193865"
	I1227 20:55:55.605039  500426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:55.622956  500426 main.go:144] libmachine: Using SSH client type: native
	I1227 20:55:55.623262  500426 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1227 20:55:55.623277  500426 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-193865 && echo "embed-certs-193865" | sudo tee /etc/hostname
	I1227 20:55:55.770626  500426 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-193865
	
	I1227 20:55:55.770734  500426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:55.790101  500426 main.go:144] libmachine: Using SSH client type: native
	I1227 20:55:55.790416  500426 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1227 20:55:55.790438  500426 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-193865' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-193865/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-193865' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:55:55.925685  500426 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:55:55.925728  500426 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:55:55.925755  500426 ubuntu.go:190] setting up certificates
	I1227 20:55:55.925763  500426 provision.go:84] configureAuth start
	I1227 20:55:55.925826  500426 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-193865
	I1227 20:55:55.942837  500426 provision.go:143] copyHostCerts
	I1227 20:55:55.942902  500426 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:55:55.942924  500426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:55:55.943006  500426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:55:55.943116  500426 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:55:55.943128  500426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:55:55.943156  500426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:55:55.943258  500426 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:55:55.943269  500426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:55:55.943294  500426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:55:55.943355  500426 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.embed-certs-193865 san=[127.0.0.1 192.168.76.2 embed-certs-193865 localhost minikube]
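The provision step above regenerates the machine's server certificate with SANs for 127.0.0.1, 192.168.76.2, embed-certs-193865, localhost and minikube before copying it to the node. A small sketch for spot-checking those SANs on a server certificate (the local file path here is illustrative, not taken from the log):

// sancheck.go - print the DNS and IP SANs of a server certificate, to confirm
// they match the san=[...] list that provision.go reports generating above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("server.pem") // illustrative path
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	fmt.Println("DNS SANs:", cert.DNSNames)    // expect embed-certs-193865, localhost, minikube
	fmt.Println("IP SANs: ", cert.IPAddresses) // expect 127.0.0.1, 192.168.76.2
}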
	I1227 20:55:56.228230  500426 provision.go:177] copyRemoteCerts
	I1227 20:55:56.228297  500426 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:55:56.228335  500426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:56.247040  500426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa Username:docker}
	I1227 20:55:56.345188  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:55:56.361774  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 20:55:56.379193  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:55:56.396215  500426 provision.go:87] duration metric: took 470.429035ms to configureAuth
	I1227 20:55:56.396241  500426 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:55:56.396435  500426 config.go:182] Loaded profile config "embed-certs-193865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:55:56.396540  500426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:56.413749  500426 main.go:144] libmachine: Using SSH client type: native
	I1227 20:55:56.414061  500426 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1227 20:55:56.414075  500426 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:55:56.753114  500426 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:55:56.753135  500426 machine.go:97] duration metric: took 4.316118872s to provisionDockerMachine
	I1227 20:55:56.753146  500426 start.go:293] postStartSetup for "embed-certs-193865" (driver="docker")
	I1227 20:55:56.753156  500426 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:55:56.753216  500426 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:55:56.753265  500426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:56.776129  500426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa Username:docker}
	I1227 20:55:56.873193  500426 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:55:56.876513  500426 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:55:56.876540  500426 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:55:56.876553  500426 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:55:56.876605  500426 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:55:56.876695  500426 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:55:56.876805  500426 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:55:56.884128  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:55:56.900846  500426 start.go:296] duration metric: took 147.684789ms for postStartSetup
	I1227 20:55:56.900922  500426 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:55:56.900976  500426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:56.918367  500426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa Username:docker}
	I1227 20:55:57.016289  500426 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:55:57.021746  500426 fix.go:56] duration metric: took 4.908107567s for fixHost
	I1227 20:55:57.021773  500426 start.go:83] releasing machines lock for "embed-certs-193865", held for 4.908164698s
	I1227 20:55:57.021856  500426 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-193865
	I1227 20:55:57.038644  500426 ssh_runner.go:195] Run: cat /version.json
	I1227 20:55:57.038703  500426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:57.038723  500426 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:55:57.038792  500426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:57.059097  500426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa Username:docker}
	I1227 20:55:57.068037  500426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa Username:docker}
	I1227 20:55:57.245339  500426 ssh_runner.go:195] Run: systemctl --version
	I1227 20:55:57.251754  500426 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:55:57.286602  500426 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:55:57.290878  500426 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:55:57.290956  500426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:55:57.298478  500426 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:55:57.298510  500426 start.go:496] detecting cgroup driver to use...
	I1227 20:55:57.298548  500426 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:55:57.298598  500426 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:55:57.313341  500426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:55:57.326359  500426 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:55:57.326430  500426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:55:57.341698  500426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:55:57.354529  500426 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:55:57.471131  500426 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:55:57.607584  500426 docker.go:234] disabling docker service ...
	I1227 20:55:57.607662  500426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:55:57.625277  500426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:55:57.638144  500426 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:55:57.759703  500426 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:55:57.876264  500426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:55:57.889434  500426 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:55:57.903173  500426 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:55:57.903286  500426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:55:57.911566  500426 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:55:57.911716  500426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:55:57.919996  500426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:55:57.928101  500426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:55:57.936591  500426 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:55:57.944466  500426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:55:57.952931  500426 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:55:57.960884  500426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:55:57.969246  500426 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:55:57.976771  500426 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:55:57.984026  500426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:55:58.107374  500426 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:55:58.276289  500426 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:55:58.276428  500426 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
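After the crio config edits and restart, the start path waits up to 60s for the CRI socket to reappear before moving on to the crictl version check. A hedged sketch of an equivalent wait loop (the poll interval is an assumption; minikube's actual retry cadence is not shown in this log):

// sockwait.go - poll for /var/run/crio/crio.sock until it exists or the
// 60s budget from the log line above is exhausted.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file is present; crio has (re)created its endpoint
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond) // assumed interval
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is present")
}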
	I1227 20:55:58.280828  500426 start.go:574] Will wait 60s for crictl version
	I1227 20:55:58.280892  500426 ssh_runner.go:195] Run: which crictl
	I1227 20:55:58.284605  500426 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:55:58.309151  500426 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:55:58.309355  500426 ssh_runner.go:195] Run: crio --version
	I1227 20:55:58.338041  500426 ssh_runner.go:195] Run: crio --version
	I1227 20:55:58.369037  500426 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:55:58.371866  500426 cli_runner.go:164] Run: docker network inspect embed-certs-193865 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:55:58.387528  500426 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 20:55:58.391369  500426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:55:58.400668  500426 kubeadm.go:884] updating cluster {Name:embed-certs-193865 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-193865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:55:58.400781  500426 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:55:58.400831  500426 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:55:58.437161  500426 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:55:58.437184  500426 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:55:58.437238  500426 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:55:58.466919  500426 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:55:58.466943  500426 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:55:58.466951  500426 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 20:55:58.467054  500426 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-193865 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-193865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:55:58.467145  500426 ssh_runner.go:195] Run: crio config
	I1227 20:55:58.538758  500426 cni.go:84] Creating CNI manager for ""
	I1227 20:55:58.538784  500426 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:55:58.538812  500426 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:55:58.538837  500426 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-193865 NodeName:embed-certs-193865 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:55:58.539003  500426 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-193865"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
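The generated kubeadm config above is a single multi-document file: InitConfiguration and ClusterConfiguration for kubeadm (v1beta4), plus KubeletConfiguration and KubeProxyConfiguration; a few lines below, the log shows it being written to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch that splits such a file and reports each document's kind (assumes gopkg.in/yaml.v3 is available; the local file name is illustrative):

// kubeadmsplit.go - decode a multi-document kubeadm.yaml and print the
// apiVersion/kind of each document, e.g. kubeadm.k8s.io/v1beta4 / ClusterConfiguration.
package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("kubeadm.yaml") // illustrative local copy
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more documents
			}
			fmt.Println("decode:", err)
			return
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}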
	
	I1227 20:55:58.539088  500426 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:55:58.546857  500426 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:55:58.546927  500426 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:55:58.554316  500426 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1227 20:55:58.567149  500426 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:55:58.579742  500426 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1227 20:55:58.592042  500426 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:55:58.595801  500426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:55:58.605015  500426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:55:58.714015  500426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:55:58.730248  500426 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865 for IP: 192.168.76.2
	I1227 20:55:58.730280  500426 certs.go:195] generating shared ca certs ...
	I1227 20:55:58.730296  500426 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:55:58.730559  500426 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:55:58.730656  500426 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:55:58.730688  500426 certs.go:257] generating profile certs ...
	I1227 20:55:58.730867  500426 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/client.key
	I1227 20:55:58.731006  500426 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/apiserver.key.b049a295
	I1227 20:55:58.731070  500426 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/proxy-client.key
	I1227 20:55:58.731244  500426 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:55:58.731316  500426 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:55:58.731337  500426 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:55:58.731391  500426 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:55:58.731458  500426 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:55:58.731510  500426 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:55:58.731590  500426 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:55:58.732267  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:55:58.756624  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:55:58.780509  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:55:58.801936  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:55:58.824671  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1227 20:55:58.849341  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 20:55:58.885908  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:55:58.909115  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 20:55:58.928415  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:55:58.948081  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:55:58.970390  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:55:58.991142  500426 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:55:59.004873  500426 ssh_runner.go:195] Run: openssl version
	I1227 20:55:59.012370  500426 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:55:59.020139  500426 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:55:59.027605  500426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:55:59.031198  500426 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:55:59.031269  500426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:55:59.072328  500426 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:55:59.079711  500426 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:55:59.087050  500426 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:55:59.094369  500426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:55:59.098419  500426 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:55:59.098487  500426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:55:59.139302  500426 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:55:59.146837  500426 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:55:59.154188  500426 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:55:59.162142  500426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:55:59.167234  500426 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:55:59.167320  500426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:55:59.208752  500426 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:55:59.217172  500426 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:55:59.221587  500426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:55:59.265760  500426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:55:59.306781  500426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:55:59.347583  500426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:55:59.394387  500426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:55:59.443657  500426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
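
The six openssl invocations above each ask the same question: will this control-plane certificate still be valid in 86400 seconds (one day)? A minimal Go sketch of that check, assuming a PEM file named apiserver.crt in the working directory (an illustrative path; the log checks the certificates under /var/lib/minikube/certs/ on the node) and not minikube's own implementation:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// Equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
	// exit non-zero when the certificate expires within the next day.
	func main() {
		data, err := os.ReadFile("apiserver.crt") // illustrative path
		if err != nil {
			fmt.Fprintln(os.Stderr, "read:", err)
			os.Exit(2)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(2)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, "parse:", err)
			os.Exit(2)
		}
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate will expire within 86400 seconds")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 86400 seconds")
	}
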
	I1227 20:55:59.499094  500426 kubeadm.go:401] StartCluster: {Name:embed-certs-193865 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-193865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:55:59.499191  500426 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:55:59.499249  500426 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:55:59.542440  500426 cri.go:96] found id: "e6ca226eab1fb21c9058d6260555c9c845dbc797687277be4d78f9bff45c68ae"
	I1227 20:55:59.542463  500426 cri.go:96] found id: "042eda9613782dfa323700aa0d06a99229b8b2dd3a00161d5be2ccee081daeb7"
	I1227 20:55:59.542468  500426 cri.go:96] found id: ""
	I1227 20:55:59.542540  500426 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:55:59.573933  500426 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:55:59Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:55:59.574010  500426 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:55:59.589375  500426 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:55:59.589393  500426 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:55:59.589485  500426 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:55:59.601907  500426 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:55:59.602293  500426 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-193865" does not appear in /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:55:59.602388  500426 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-272475/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-193865" cluster setting kubeconfig missing "embed-certs-193865" context setting]
	I1227 20:55:59.602686  500426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:55:59.603807  500426 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:55:59.617607  500426 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1227 20:55:59.617639  500426 kubeadm.go:602] duration metric: took 28.240718ms to restartPrimaryControlPlane
	I1227 20:55:59.617649  500426 kubeadm.go:403] duration metric: took 118.56788ms to StartCluster
	I1227 20:55:59.617664  500426 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:55:59.617736  500426 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:55:59.619157  500426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:55:59.619791  500426 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:55:59.620702  500426 config.go:182] Loaded profile config "embed-certs-193865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:55:59.620756  500426 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:55:59.620924  500426 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-193865"
	I1227 20:55:59.620948  500426 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-193865"
	W1227 20:55:59.620964  500426 addons.go:248] addon storage-provisioner should already be in state true
	I1227 20:55:59.620986  500426 host.go:66] Checking if "embed-certs-193865" exists ...
	I1227 20:55:59.621574  500426 cli_runner.go:164] Run: docker container inspect embed-certs-193865 --format={{.State.Status}}
	I1227 20:55:59.621786  500426 addons.go:70] Setting dashboard=true in profile "embed-certs-193865"
	I1227 20:55:59.621813  500426 addons.go:239] Setting addon dashboard=true in "embed-certs-193865"
	W1227 20:55:59.621820  500426 addons.go:248] addon dashboard should already be in state true
	I1227 20:55:59.621844  500426 host.go:66] Checking if "embed-certs-193865" exists ...
	I1227 20:55:59.622375  500426 cli_runner.go:164] Run: docker container inspect embed-certs-193865 --format={{.State.Status}}
	I1227 20:55:59.624690  500426 addons.go:70] Setting default-storageclass=true in profile "embed-certs-193865"
	I1227 20:55:59.624750  500426 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-193865"
	I1227 20:55:59.631687  500426 out.go:179] * Verifying Kubernetes components...
	I1227 20:55:59.633025  500426 cli_runner.go:164] Run: docker container inspect embed-certs-193865 --format={{.State.Status}}
	I1227 20:55:59.635583  500426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:55:59.681668  500426 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 20:55:59.684700  500426 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:55:59.687629  500426 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:55:59.687652  500426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:55:59.687721  500426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:59.687934  500426 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 20:55:59.691523  500426 addons.go:239] Setting addon default-storageclass=true in "embed-certs-193865"
	W1227 20:55:59.691549  500426 addons.go:248] addon default-storageclass should already be in state true
	I1227 20:55:59.691572  500426 host.go:66] Checking if "embed-certs-193865" exists ...
	I1227 20:55:59.692037  500426 cli_runner.go:164] Run: docker container inspect embed-certs-193865 --format={{.State.Status}}
	I1227 20:55:59.692433  500426 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 20:55:59.692456  500426 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 20:55:59.692503  500426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:59.737419  500426 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:55:59.737440  500426 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:55:59.737534  500426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:59.753577  500426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa Username:docker}
	I1227 20:55:59.768175  500426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa Username:docker}
	I1227 20:55:59.784074  500426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa Username:docker}
	I1227 20:55:59.952072  500426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:55:59.955025  500426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:56:00.084411  500426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:56:00.122201  500426 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 20:56:00.122230  500426 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 20:56:00.213206  500426 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 20:56:00.213233  500426 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 20:56:00.294003  500426 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 20:56:00.294086  500426 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 20:56:00.350901  500426 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 20:56:00.350927  500426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 20:56:00.371697  500426 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 20:56:00.371792  500426 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 20:56:00.400511  500426 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 20:56:00.400603  500426 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 20:56:00.486751  500426 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 20:56:00.486839  500426 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 20:56:00.511202  500426 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 20:56:00.511280  500426 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 20:56:00.530085  500426 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:56:00.530167  500426 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 20:56:00.554074  500426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:56:04.199287  500426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.247133339s)
	I1227 20:56:04.199345  500426 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.244298922s)
	I1227 20:56:04.199377  500426 node_ready.go:35] waiting up to 6m0s for node "embed-certs-193865" to be "Ready" ...
	I1227 20:56:04.199691  500426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.115251169s)
	I1227 20:56:04.253208  500426 node_ready.go:49] node "embed-certs-193865" is "Ready"
	I1227 20:56:04.253285  500426 node_ready.go:38] duration metric: took 53.889485ms for node "embed-certs-193865" to be "Ready" ...
	I1227 20:56:04.253313  500426 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:56:04.253397  500426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:56:04.367113  500426 api_server.go:72] duration metric: took 4.747283511s to wait for apiserver process to appear ...
	I1227 20:56:04.367138  500426 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:56:04.367159  500426 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:56:04.367541  500426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.813374429s)
	I1227 20:56:04.378569  500426 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-193865 addons enable metrics-server
	
	I1227 20:56:04.382738  500426 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1227 20:56:04.383462  500426 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:56:04.383484  500426 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:56:04.386467  500426 addons.go:530] duration metric: took 4.765712745s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1227 20:56:04.868101  500426 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:56:04.876068  500426 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 20:56:04.877160  500426 api_server.go:141] control plane version: v1.35.0
	I1227 20:56:04.877217  500426 api_server.go:131] duration metric: took 510.070791ms to wait for apiserver health ...
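
The healthz exchange above (an initial 500 while poststarthook/rbac/bootstrap-roles is still failing, then a 200 "ok" about half a second later) is the usual pattern while the API server finishes its post-start hooks. A minimal sketch of such a poll loop against the URL shown in the log; the InsecureSkipVerify transport and the timing constants are assumptions for the sketch only, whereas the real check verifies the apiserver certificate against the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		url := "https://192.168.76.2:8443/healthz" // endpoint from the log above
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Sketch only: skip TLS verification; a real client trusts the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz:", string(body)) // typically just "ok"
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			} else {
				fmt.Println("healthz request failed:", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for healthz")
	}
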
	I1227 20:56:04.877241  500426 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:56:04.880954  500426 system_pods.go:59] 8 kube-system pods found
	I1227 20:56:04.880995  500426 system_pods.go:61] "coredns-7d764666f9-xj2kx" [bb4db36b-a468-42ed-a57d-07d66fd3677f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:56:04.881006  500426 system_pods.go:61] "etcd-embed-certs-193865" [189ff29b-8bc4-4c48-8fd1-32e246482296] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:56:04.881021  500426 system_pods.go:61] "kindnet-fqnrt" [6f652890-8212-487a-a479-ac54591d0db0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:56:04.881029  500426 system_pods.go:61] "kube-apiserver-embed-certs-193865" [e71ae5e1-2d4c-413e-990a-9c9b539d62fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:56:04.881038  500426 system_pods.go:61] "kube-controller-manager-embed-certs-193865" [ad24b721-376b-410f-8d96-c7b80585aa44] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:56:04.881045  500426 system_pods.go:61] "kube-proxy-5mf9z" [2c7bfa55-35a8-4519-8282-2bd750cbc449] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 20:56:04.881061  500426 system_pods.go:61] "kube-scheduler-embed-certs-193865" [c0d62251-0ec8-4ac3-a396-5574e3a92155] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:56:04.881068  500426 system_pods.go:61] "storage-provisioner" [eaf08c7a-30b7-4c09-a98e-4b9be46b8f8d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:56:04.881075  500426 system_pods.go:74] duration metric: took 3.81418ms to wait for pod list to return data ...
	I1227 20:56:04.881088  500426 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:56:04.883771  500426 default_sa.go:45] found service account: "default"
	I1227 20:56:04.883800  500426 default_sa.go:55] duration metric: took 2.706584ms for default service account to be created ...
	I1227 20:56:04.883810  500426 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:56:04.886457  500426 system_pods.go:86] 8 kube-system pods found
	I1227 20:56:04.886493  500426 system_pods.go:89] "coredns-7d764666f9-xj2kx" [bb4db36b-a468-42ed-a57d-07d66fd3677f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:56:04.886503  500426 system_pods.go:89] "etcd-embed-certs-193865" [189ff29b-8bc4-4c48-8fd1-32e246482296] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:56:04.886522  500426 system_pods.go:89] "kindnet-fqnrt" [6f652890-8212-487a-a479-ac54591d0db0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:56:04.886534  500426 system_pods.go:89] "kube-apiserver-embed-certs-193865" [e71ae5e1-2d4c-413e-990a-9c9b539d62fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:56:04.886542  500426 system_pods.go:89] "kube-controller-manager-embed-certs-193865" [ad24b721-376b-410f-8d96-c7b80585aa44] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:56:04.886552  500426 system_pods.go:89] "kube-proxy-5mf9z" [2c7bfa55-35a8-4519-8282-2bd750cbc449] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 20:56:04.886559  500426 system_pods.go:89] "kube-scheduler-embed-certs-193865" [c0d62251-0ec8-4ac3-a396-5574e3a92155] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:56:04.886569  500426 system_pods.go:89] "storage-provisioner" [eaf08c7a-30b7-4c09-a98e-4b9be46b8f8d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:56:04.886575  500426 system_pods.go:126] duration metric: took 2.759998ms to wait for k8s-apps to be running ...
	I1227 20:56:04.886582  500426 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:56:04.886636  500426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:56:04.927495  500426 system_svc.go:56] duration metric: took 40.901685ms WaitForService to wait for kubelet
	I1227 20:56:04.927569  500426 kubeadm.go:587] duration metric: took 5.307742723s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:56:04.927620  500426 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:56:04.932672  500426 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:56:04.932748  500426 node_conditions.go:123] node cpu capacity is 2
	I1227 20:56:04.932775  500426 node_conditions.go:105] duration metric: took 5.136285ms to run NodePressure ...
	I1227 20:56:04.932803  500426 start.go:242] waiting for startup goroutines ...
	I1227 20:56:04.932835  500426 start.go:247] waiting for cluster config update ...
	I1227 20:56:04.932867  500426 start.go:256] writing updated cluster config ...
	I1227 20:56:04.933194  500426 ssh_runner.go:195] Run: rm -f paused
	I1227 20:56:04.937107  500426 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:56:04.940537  500426 pod_ready.go:83] waiting for pod "coredns-7d764666f9-xj2kx" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 20:56:06.947448  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:08.948579  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:11.446726  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:13.947114  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:16.446484  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:18.446845  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:20.947128  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:23.447074  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:25.945434  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:27.946117  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:30.446262  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:32.946176  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:35.446104  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:37.946848  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:40.445532  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	I1227 20:56:40.945313  500426 pod_ready.go:94] pod "coredns-7d764666f9-xj2kx" is "Ready"
	I1227 20:56:40.945341  500426 pod_ready.go:86] duration metric: took 36.004752161s for pod "coredns-7d764666f9-xj2kx" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:56:40.947871  500426 pod_ready.go:83] waiting for pod "etcd-embed-certs-193865" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:56:40.952126  500426 pod_ready.go:94] pod "etcd-embed-certs-193865" is "Ready"
	I1227 20:56:40.952153  500426 pod_ready.go:86] duration metric: took 4.254545ms for pod "etcd-embed-certs-193865" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:56:40.954269  500426 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-193865" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:56:40.959618  500426 pod_ready.go:94] pod "kube-apiserver-embed-certs-193865" is "Ready"
	I1227 20:56:40.959651  500426 pod_ready.go:86] duration metric: took 5.358901ms for pod "kube-apiserver-embed-certs-193865" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:56:40.961844  500426 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-193865" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:56:41.143910  500426 pod_ready.go:94] pod "kube-controller-manager-embed-certs-193865" is "Ready"
	I1227 20:56:41.143940  500426 pod_ready.go:86] duration metric: took 182.073077ms for pod "kube-controller-manager-embed-certs-193865" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:56:41.343860  500426 pod_ready.go:83] waiting for pod "kube-proxy-5mf9z" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:56:41.743631  500426 pod_ready.go:94] pod "kube-proxy-5mf9z" is "Ready"
	I1227 20:56:41.743662  500426 pod_ready.go:86] duration metric: took 399.772644ms for pod "kube-proxy-5mf9z" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:56:41.943936  500426 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-193865" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:56:42.344573  500426 pod_ready.go:94] pod "kube-scheduler-embed-certs-193865" is "Ready"
	I1227 20:56:42.344603  500426 pod_ready.go:86] duration metric: took 400.639363ms for pod "kube-scheduler-embed-certs-193865" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:56:42.344616  500426 pod_ready.go:40] duration metric: took 37.407451484s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
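
The pod_ready waits above poll each selected kube-system pod until its Ready condition turns True (coredns took about 36 seconds here; the static control-plane pods were already Ready). A minimal sketch of that readiness predicate using the upstream Go types from k8s.io/api; the isPodReady helper and the hand-built pod are illustrative, not minikube code, and a real caller would fetch the pod through client-go:

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// isPodReady reports whether the pod's Ready condition is True,
	// which is what the "is not Ready" / "is Ready" lines above reflect.
	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Hand-built example pod; a real caller would Get() it from the API server.
		pod := &corev1.Pod{
			Status: corev1.PodStatus{
				Conditions: []corev1.PodCondition{
					{Type: corev1.PodReady, Status: corev1.ConditionTrue},
				},
			},
		}
		fmt.Println("ready:", isPodReady(pod))
	}
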
	I1227 20:56:42.400279  500426 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 20:56:42.403909  500426 out.go:203] 
	W1227 20:56:42.407005  500426 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 20:56:42.410080  500426 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 20:56:42.413213  500426 out.go:179] * Done! kubectl is now configured to use "embed-certs-193865" cluster and "default" namespace by default
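
The version-skew warning a few lines up is expected here: the host kubectl is v1.33.2 while the cluster runs v1.35.0, a skew of 35 - 33 = 2 minor versions, and kubectl is only supported within one minor version of the API server, hence the suggestion to fall back to the bundled "minikube kubectl".
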
	
	
	==> CRI-O <==
	Dec 27 20:56:44 embed-certs-193865 crio[654]: time="2025-12-27T20:56:44.641170864Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:56:44 embed-certs-193865 crio[654]: time="2025-12-27T20:56:44.645747124Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:56:44 embed-certs-193865 crio[654]: time="2025-12-27T20:56:44.645882324Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:56:44 embed-certs-193865 crio[654]: time="2025-12-27T20:56:44.645914356Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:56:44 embed-certs-193865 crio[654]: time="2025-12-27T20:56:44.651099781Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:56:44 embed-certs-193865 crio[654]: time="2025-12-27T20:56:44.65113475Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:56:44 embed-certs-193865 crio[654]: time="2025-12-27T20:56:44.651156099Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:56:44 embed-certs-193865 crio[654]: time="2025-12-27T20:56:44.654780647Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:56:44 embed-certs-193865 crio[654]: time="2025-12-27T20:56:44.654815198Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:56:44 embed-certs-193865 crio[654]: time="2025-12-27T20:56:44.654839206Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:56:44 embed-certs-193865 crio[654]: time="2025-12-27T20:56:44.658694935Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:56:44 embed-certs-193865 crio[654]: time="2025-12-27T20:56:44.658723365Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:56:49 embed-certs-193865 crio[654]: time="2025-12-27T20:56:49.955146623Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=165ccd99-a8d2-460b-a867-24f20b090613 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:56:49 embed-certs-193865 crio[654]: time="2025-12-27T20:56:49.956090843Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cec001e0-b48e-4b81-ab61-f096979e315a name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:56:49 embed-certs-193865 crio[654]: time="2025-12-27T20:56:49.957064396Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz/dashboard-metrics-scraper" id=5378f628-ecc0-4468-a610-da9f3dc2e3cb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:56:49 embed-certs-193865 crio[654]: time="2025-12-27T20:56:49.957157152Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:56:49 embed-certs-193865 crio[654]: time="2025-12-27T20:56:49.963879577Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:56:49 embed-certs-193865 crio[654]: time="2025-12-27T20:56:49.96567494Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:56:49 embed-certs-193865 crio[654]: time="2025-12-27T20:56:49.982176829Z" level=info msg="Created container 8aa96fff450baf4425feed3f8caffc607682e228422def93111eb887ed139977: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz/dashboard-metrics-scraper" id=5378f628-ecc0-4468-a610-da9f3dc2e3cb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:56:49 embed-certs-193865 crio[654]: time="2025-12-27T20:56:49.983934335Z" level=info msg="Starting container: 8aa96fff450baf4425feed3f8caffc607682e228422def93111eb887ed139977" id=046410d8-d74c-4ce9-8f9f-c41e703688b5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:56:49 embed-certs-193865 crio[654]: time="2025-12-27T20:56:49.985999665Z" level=info msg="Started container" PID=1742 containerID=8aa96fff450baf4425feed3f8caffc607682e228422def93111eb887ed139977 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz/dashboard-metrics-scraper id=046410d8-d74c-4ce9-8f9f-c41e703688b5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8a5e13fe6839f416e5fd7e9050cdd4c63bb5e95f56df123e41a15c33aee91e8b
	Dec 27 20:56:49 embed-certs-193865 conmon[1740]: conmon 8aa96fff450baf4425fe <ninfo>: container 1742 exited with status 1
	Dec 27 20:56:50 embed-certs-193865 crio[654]: time="2025-12-27T20:56:50.236721755Z" level=info msg="Removing container: c358c768b3adb42665b80284fa9c986928f9b4359bd0429fba18e82301a41512" id=bf153da0-e2b6-472c-b9da-e5a4ddf43369 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:56:50 embed-certs-193865 crio[654]: time="2025-12-27T20:56:50.266181137Z" level=info msg="Error loading conmon cgroup of container c358c768b3adb42665b80284fa9c986928f9b4359bd0429fba18e82301a41512: cgroup deleted" id=bf153da0-e2b6-472c-b9da-e5a4ddf43369 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:56:50 embed-certs-193865 crio[654]: time="2025-12-27T20:56:50.272570229Z" level=info msg="Removed container c358c768b3adb42665b80284fa9c986928f9b4359bd0429fba18e82301a41512: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz/dashboard-metrics-scraper" id=bf153da0-e2b6-472c-b9da-e5a4ddf43369 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8aa96fff450ba       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago       Exited              dashboard-metrics-scraper   3                   8a5e13fe6839f       dashboard-metrics-scraper-867fb5f87b-p8jjz   kubernetes-dashboard
	be8b934ba2d3a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago      Running             storage-provisioner         2                   7bb55720f14d9       storage-provisioner                          kube-system
	82f9c7926d9e0       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago      Running             kubernetes-dashboard        0                   8dc5bfd7649f6       kubernetes-dashboard-b84665fb8-44qk4         kubernetes-dashboard
	de35335f8b6e8       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago      Running             busybox                     1                   d4b2ad768e893       busybox                                      default
	4af7ead68be14       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           53 seconds ago      Running             coredns                     1                   639ea4b96e9a3       coredns-7d764666f9-xj2kx                     kube-system
	1727e2655810d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago      Exited              storage-provisioner         1                   7bb55720f14d9       storage-provisioner                          kube-system
	ca8c3de3fdc21       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           53 seconds ago      Running             kindnet-cni                 1                   c15d8b555831e       kindnet-fqnrt                                kube-system
	da7a6ea56aa7b       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           53 seconds ago      Running             kube-proxy                  1                   15aa91376cda8       kube-proxy-5mf9z                             kube-system
	3dfb4788db04d       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           58 seconds ago      Running             kube-scheduler              1                   20eba3cdf6fa7       kube-scheduler-embed-certs-193865            kube-system
	4e5cabfe80bde       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           58 seconds ago      Running             kube-controller-manager     1                   95715ee181dbb       kube-controller-manager-embed-certs-193865   kube-system
	e6ca226eab1fb       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           58 seconds ago      Running             kube-apiserver              1                   29ab97bb8a6dd       kube-apiserver-embed-certs-193865            kube-system
	042eda9613782       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           58 seconds ago      Running             etcd                        1                   ee457a87f1344       etcd-embed-certs-193865                      kube-system
	
	
	==> coredns [4af7ead68be14764fdb90b14930b698a11839ca32ca4aad38127b0a1c26f10ea] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:59208 - 55852 "HINFO IN 8556797303068945170.8015881326220980466. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022152962s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               embed-certs-193865
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-193865
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=embed-certs-193865
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_55_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:55:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-193865
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:56:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:56:33 +0000   Sat, 27 Dec 2025 20:55:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:56:33 +0000   Sat, 27 Dec 2025 20:55:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:56:33 +0000   Sat, 27 Dec 2025 20:55:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:56:33 +0000   Sat, 27 Dec 2025 20:55:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-193865
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                5669f867-44f3-47ed-a81f-7695205dabf5
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-7d764666f9-xj2kx                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     104s
	  kube-system                 etcd-embed-certs-193865                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         109s
	  kube-system                 kindnet-fqnrt                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-embed-certs-193865             250m (12%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-embed-certs-193865    200m (10%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-5mf9z                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-embed-certs-193865             100m (5%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-p8jjz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-44qk4          0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  106s  node-controller  Node embed-certs-193865 event: Registered Node embed-certs-193865 in Controller
	  Normal  RegisteredNode  51s   node-controller  Node embed-certs-193865 event: Registered Node embed-certs-193865 in Controller
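
As a sanity check on the Allocated resources table above: the 850m CPU request total is the sum of the per-pod requests (coredns 100m + etcd 100m + kindnet 100m + kube-apiserver 250m + kube-controller-manager 200m + kube-scheduler 100m = 850m), and 850m of the node's 2 CPUs (2000m) is roughly 42%.
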
	
	
	==> dmesg <==
	[Dec27 20:23] overlayfs: idmapped layers are currently not supported
	[Dec27 20:24] overlayfs: idmapped layers are currently not supported
	[Dec27 20:25] overlayfs: idmapped layers are currently not supported
	[ +35.447549] overlayfs: idmapped layers are currently not supported
	[Dec27 20:26] overlayfs: idmapped layers are currently not supported
	[Dec27 20:27] overlayfs: idmapped layers are currently not supported
	[  +6.770645] overlayfs: idmapped layers are currently not supported
	[Dec27 20:28] overlayfs: idmapped layers are currently not supported
	[ +25.872751] overlayfs: idmapped layers are currently not supported
	[Dec27 20:29] overlayfs: idmapped layers are currently not supported
	[ +32.997137] overlayfs: idmapped layers are currently not supported
	[Dec27 20:31] overlayfs: idmapped layers are currently not supported
	[Dec27 20:33] overlayfs: idmapped layers are currently not supported
	[ +33.772475] overlayfs: idmapped layers are currently not supported
	[Dec27 20:39] overlayfs: idmapped layers are currently not supported
	[Dec27 20:40] overlayfs: idmapped layers are currently not supported
	[Dec27 20:44] overlayfs: idmapped layers are currently not supported
	[Dec27 20:45] overlayfs: idmapped layers are currently not supported
	[Dec27 20:49] overlayfs: idmapped layers are currently not supported
	[Dec27 20:50] overlayfs: idmapped layers are currently not supported
	[Dec27 20:51] overlayfs: idmapped layers are currently not supported
	[Dec27 20:52] overlayfs: idmapped layers are currently not supported
	[Dec27 20:53] overlayfs: idmapped layers are currently not supported
	[Dec27 20:55] overlayfs: idmapped layers are currently not supported
	[ +57.272039] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [042eda9613782dfa323700aa0d06a99229b8b2dd3a00161d5be2ccee081daeb7] <==
	{"level":"info","ts":"2025-12-27T20:55:59.991482Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T20:55:59.991492Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T20:55:59.991684Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T20:55:59.991695Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T20:55:59.992723Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-27T20:55:59.992820Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T20:55:59.992880Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T20:56:00.285843Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T20:56:00.285997Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:56:00.286073Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T20:56:00.286116Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:56:00.286171Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T20:56:00.295859Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T20:56:00.296010Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:56:00.296067Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T20:56:00.296107Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T20:56:00.310150Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-193865 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:56:00.310461Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:56:00.310672Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:56:00.311755Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:56:00.324729Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:56:00.330307Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:56:00.339777Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:56:00.339849Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:56:00.412651Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 20:56:57 up  2:39,  0 user,  load average: 1.29, 1.29, 1.63
	Linux embed-certs-193865 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ca8c3de3fdc21beb2e56b12111a476cd88bb9d76e087d7bc994a71989d012ece] <==
	I1227 20:56:04.449911       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:56:04.450082       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 20:56:04.450199       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:56:04.450210       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:56:04.450221       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:56:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:56:04.637272       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:56:04.637350       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:56:04.637386       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:56:04.637838       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 20:56:34.637372       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1227 20:56:34.637383       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1227 20:56:34.638302       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1227 20:56:34.638314       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1227 20:56:35.838238       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:56:35.838267       1 metrics.go:72] Registering metrics
	I1227 20:56:35.838329       1 controller.go:711] "Syncing nftables rules"
	I1227 20:56:44.637346       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:56:44.637399       1 main.go:301] handling current node
	I1227 20:56:54.639656       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:56:54.639690       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e6ca226eab1fb21c9058d6260555c9c845dbc797687277be4d78f9bff45c68ae] <==
	I1227 20:56:03.178727       1 aggregator.go:187] initial CRD sync complete...
	I1227 20:56:03.178751       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 20:56:03.178757       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:56:03.178764       1 cache.go:39] Caches are synced for autoregister controller
	I1227 20:56:03.190269       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:56:03.202508       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:56:03.231639       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 20:56:03.231726       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 20:56:03.231770       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 20:56:03.231969       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 20:56:03.243756       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 20:56:03.245716       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:03.260991       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1227 20:56:03.302585       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 20:56:03.747080       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:56:03.792026       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 20:56:03.867595       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:56:03.872312       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:56:03.962958       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:56:03.993706       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:56:04.302401       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.90.159"}
	I1227 20:56:04.354531       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.184.114"}
	I1227 20:56:06.523021       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:56:06.774602       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:56:06.822543       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4e5cabfe80bde33d172c974ffd714e8d551a86c345273a6f54f995aca0fd5be9] <==
	I1227 20:56:06.136269       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.136489       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.136589       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.136796       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.136972       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.137098       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.138103       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.138579       1 range_allocator.go:177] "Sending events to api server"
	I1227 20:56:06.138663       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 20:56:06.138692       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:56:06.138722       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.138841       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.139045       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.139091       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.141900       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:56:06.142784       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.142978       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.143398       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.166699       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.229952       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.229978       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:56:06.229984       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:56:06.243979       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.829057       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1227 20:56:06.830736       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [da7a6ea56aa7b1cc7394b633af80fcf03ded1031c60eee45e624da67ab4f23e0] <==
	I1227 20:56:04.507744       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:56:04.618932       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:56:04.719946       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:04.719981       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 20:56:04.720063       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:56:04.742880       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:56:04.742929       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:56:04.747004       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:56:04.747299       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:56:04.747318       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:56:04.748382       1 config.go:200] "Starting service config controller"
	I1227 20:56:04.748400       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:56:04.751639       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:56:04.751726       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:56:04.751770       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:56:04.751797       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:56:04.752086       1 config.go:309] "Starting node config controller"
	I1227 20:56:04.752106       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:56:04.848575       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:56:04.851806       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 20:56:04.851947       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 20:56:04.852263       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [3dfb4788db04d24ff921ca961d74a35736ceb9dcb271f67d3eef434cef1c7725] <==
	I1227 20:56:01.350522       1 serving.go:386] Generated self-signed cert in-memory
	W1227 20:56:03.009101       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 20:56:03.012614       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 20:56:03.012707       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 20:56:03.012716       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 20:56:03.163928       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 20:56:03.168755       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:56:03.170980       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 20:56:03.171121       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 20:56:03.171132       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:56:03.171147       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 20:56:03.271584       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:56:17 embed-certs-193865 kubelet[781]: E1227 20:56:17.139558     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-44qk4" containerName="kubernetes-dashboard"
	Dec 27 20:56:17 embed-certs-193865 kubelet[781]: E1227 20:56:17.765675     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz" containerName="dashboard-metrics-scraper"
	Dec 27 20:56:17 embed-certs-193865 kubelet[781]: I1227 20:56:17.765724     781 scope.go:122] "RemoveContainer" containerID="55e33c20479d31e2b0cd7977c093d8bb81f5bcb1aa82240fc621bc4ced59e3dc"
	Dec 27 20:56:17 embed-certs-193865 kubelet[781]: E1227 20:56:17.765921     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-p8jjz_kubernetes-dashboard(bbc1d301-7652-4cfe-b1a7-d1b8e21d8c76)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz" podUID="bbc1d301-7652-4cfe-b1a7-d1b8e21d8c76"
	Dec 27 20:56:22 embed-certs-193865 kubelet[781]: E1227 20:56:22.954786     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz" containerName="dashboard-metrics-scraper"
	Dec 27 20:56:22 embed-certs-193865 kubelet[781]: I1227 20:56:22.954827     781 scope.go:122] "RemoveContainer" containerID="55e33c20479d31e2b0cd7977c093d8bb81f5bcb1aa82240fc621bc4ced59e3dc"
	Dec 27 20:56:23 embed-certs-193865 kubelet[781]: I1227 20:56:23.154067     781 scope.go:122] "RemoveContainer" containerID="55e33c20479d31e2b0cd7977c093d8bb81f5bcb1aa82240fc621bc4ced59e3dc"
	Dec 27 20:56:23 embed-certs-193865 kubelet[781]: E1227 20:56:23.154352     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz" containerName="dashboard-metrics-scraper"
	Dec 27 20:56:23 embed-certs-193865 kubelet[781]: I1227 20:56:23.154379     781 scope.go:122] "RemoveContainer" containerID="c358c768b3adb42665b80284fa9c986928f9b4359bd0429fba18e82301a41512"
	Dec 27 20:56:23 embed-certs-193865 kubelet[781]: E1227 20:56:23.154554     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-p8jjz_kubernetes-dashboard(bbc1d301-7652-4cfe-b1a7-d1b8e21d8c76)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz" podUID="bbc1d301-7652-4cfe-b1a7-d1b8e21d8c76"
	Dec 27 20:56:23 embed-certs-193865 kubelet[781]: I1227 20:56:23.171764     781 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-44qk4" podStartSLOduration=8.903759077 podStartE2EDuration="17.171750123s" podCreationTimestamp="2025-12-27 20:56:06 +0000 UTC" firstStartedPulling="2025-12-27 20:56:06.972575636 +0000 UTC m=+8.239529353" lastFinishedPulling="2025-12-27 20:56:15.240566683 +0000 UTC m=+16.507520399" observedRunningTime="2025-12-27 20:56:16.150867705 +0000 UTC m=+17.417821422" watchObservedRunningTime="2025-12-27 20:56:23.171750123 +0000 UTC m=+24.438703839"
	Dec 27 20:56:27 embed-certs-193865 kubelet[781]: E1227 20:56:27.766320     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz" containerName="dashboard-metrics-scraper"
	Dec 27 20:56:27 embed-certs-193865 kubelet[781]: I1227 20:56:27.766805     781 scope.go:122] "RemoveContainer" containerID="c358c768b3adb42665b80284fa9c986928f9b4359bd0429fba18e82301a41512"
	Dec 27 20:56:27 embed-certs-193865 kubelet[781]: E1227 20:56:27.767051     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-p8jjz_kubernetes-dashboard(bbc1d301-7652-4cfe-b1a7-d1b8e21d8c76)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz" podUID="bbc1d301-7652-4cfe-b1a7-d1b8e21d8c76"
	Dec 27 20:56:35 embed-certs-193865 kubelet[781]: I1227 20:56:35.184994     781 scope.go:122] "RemoveContainer" containerID="1727e2655810dd7761bc82b6eccd9673976f5a22e959a68cb5eb1c718414ce6d"
	Dec 27 20:56:40 embed-certs-193865 kubelet[781]: E1227 20:56:40.553118     781 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-xj2kx" containerName="coredns"
	Dec 27 20:56:49 embed-certs-193865 kubelet[781]: E1227 20:56:49.954456     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz" containerName="dashboard-metrics-scraper"
	Dec 27 20:56:49 embed-certs-193865 kubelet[781]: I1227 20:56:49.954506     781 scope.go:122] "RemoveContainer" containerID="c358c768b3adb42665b80284fa9c986928f9b4359bd0429fba18e82301a41512"
	Dec 27 20:56:50 embed-certs-193865 kubelet[781]: I1227 20:56:50.234698     781 scope.go:122] "RemoveContainer" containerID="c358c768b3adb42665b80284fa9c986928f9b4359bd0429fba18e82301a41512"
	Dec 27 20:56:50 embed-certs-193865 kubelet[781]: E1227 20:56:50.235008     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz" containerName="dashboard-metrics-scraper"
	Dec 27 20:56:50 embed-certs-193865 kubelet[781]: I1227 20:56:50.235037     781 scope.go:122] "RemoveContainer" containerID="8aa96fff450baf4425feed3f8caffc607682e228422def93111eb887ed139977"
	Dec 27 20:56:50 embed-certs-193865 kubelet[781]: E1227 20:56:50.235199     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-p8jjz_kubernetes-dashboard(bbc1d301-7652-4cfe-b1a7-d1b8e21d8c76)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz" podUID="bbc1d301-7652-4cfe-b1a7-d1b8e21d8c76"
	Dec 27 20:56:54 embed-certs-193865 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 20:56:54 embed-certs-193865 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 20:56:54 embed-certs-193865 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [82f9c7926d9e00ef4eee7b452a712b6517c6239daad7d110dbea66322be1a9fe] <==
	2025/12/27 20:56:15 Using namespace: kubernetes-dashboard
	2025/12/27 20:56:15 Using in-cluster config to connect to apiserver
	2025/12/27 20:56:15 Using secret token for csrf signing
	2025/12/27 20:56:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 20:56:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 20:56:15 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 20:56:15 Generating JWE encryption key
	2025/12/27 20:56:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 20:56:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 20:56:15 Initializing JWE encryption key from synchronized object
	2025/12/27 20:56:15 Creating in-cluster Sidecar client
	2025/12/27 20:56:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:56:15 Serving insecurely on HTTP port: 9090
	2025/12/27 20:56:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:56:15 Starting overwatch
	
	
	==> storage-provisioner [1727e2655810dd7761bc82b6eccd9673976f5a22e959a68cb5eb1c718414ce6d] <==
	I1227 20:56:04.428680       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 20:56:34.457813       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [be8b934ba2d3ac38f4d68377967b793ddd2b8910b8768fcc498117296522c796] <==
	I1227 20:56:35.255903       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 20:56:35.267747       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 20:56:35.267877       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 20:56:35.271262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:56:38.727070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:56:42.987439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:56:46.585490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:56:49.639048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:56:52.662570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:56:52.667326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:56:52.667470       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 20:56:52.667646       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-193865_a288591f-342f-47c1-b354-ee80038e80b3!
	I1227 20:56:52.673548       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"973a1ef1-d110-4815-b972-77baf58b2ed2", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-193865_a288591f-342f-47c1-b354-ee80038e80b3 became leader
	W1227 20:56:52.679420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:56:52.689677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:56:52.768031       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-193865_a288591f-342f-47c1-b354-ee80038e80b3!
	W1227 20:56:54.692726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:56:54.698185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:56:56.702295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:56:56.708150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-193865 -n embed-certs-193865
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-193865 -n embed-certs-193865: exit status 2 (374.180815ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-193865 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-193865
helpers_test.go:244: (dbg) docker inspect embed-certs-193865:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "910081dd96e2a5637f3b408a8057a7254f3b80b49d653ffba57b3de358a32ed9",
	        "Created": "2025-12-27T20:54:52.231777017Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 500553,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:55:52.164842249Z",
	            "FinishedAt": "2025-12-27T20:55:51.341514375Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/910081dd96e2a5637f3b408a8057a7254f3b80b49d653ffba57b3de358a32ed9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/910081dd96e2a5637f3b408a8057a7254f3b80b49d653ffba57b3de358a32ed9/hostname",
	        "HostsPath": "/var/lib/docker/containers/910081dd96e2a5637f3b408a8057a7254f3b80b49d653ffba57b3de358a32ed9/hosts",
	        "LogPath": "/var/lib/docker/containers/910081dd96e2a5637f3b408a8057a7254f3b80b49d653ffba57b3de358a32ed9/910081dd96e2a5637f3b408a8057a7254f3b80b49d653ffba57b3de358a32ed9-json.log",
	        "Name": "/embed-certs-193865",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-193865:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-193865",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "910081dd96e2a5637f3b408a8057a7254f3b80b49d653ffba57b3de358a32ed9",
	                "LowerDir": "/var/lib/docker/overlay2/fdd0efe955c9b2f82090fe0b88aba3b05df41490a2ac55c7669ec25ea57da42f-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdd0efe955c9b2f82090fe0b88aba3b05df41490a2ac55c7669ec25ea57da42f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdd0efe955c9b2f82090fe0b88aba3b05df41490a2ac55c7669ec25ea57da42f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdd0efe955c9b2f82090fe0b88aba3b05df41490a2ac55c7669ec25ea57da42f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-193865",
	                "Source": "/var/lib/docker/volumes/embed-certs-193865/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-193865",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-193865",
	                "name.minikube.sigs.k8s.io": "embed-certs-193865",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "29956075df58cec02773902d0bc62bbfbb0ef700cff6861f8c4646851ef90ecf",
	            "SandboxKey": "/var/run/docker/netns/29956075df58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-193865": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:9f:25:e6:87:14",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "58b23b0ff82a7c2d13a32fdf89113eb222c2e15062269f5db64ae246b28bdf6b",
	                    "EndpointID": "8541ce09d99ee2fb7c55f72489f84b19051d2f8214d5e8b6b32346b883bc0fe7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-193865",
	                        "910081dd96e2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-193865 -n embed-certs-193865
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-193865 -n embed-certs-193865: exit status 2 (337.112616ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-193865 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-193865 logs -n 25: (1.381672064s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-855707 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-855707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:50 UTC │                     │
	│ stop    │ -p old-k8s-version-855707 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:50 UTC │ 27 Dec 25 20:51 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-855707 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:51 UTC │ 27 Dec 25 20:51 UTC │
	│ start   │ -p old-k8s-version-855707 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:51 UTC │ 27 Dec 25 20:52 UTC │
	│ image   │ old-k8s-version-855707 image list --format=json                                                                                                                                                                                               │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ pause   │ -p old-k8s-version-855707 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │                     │
	│ delete  │ -p old-k8s-version-855707                                                                                                                                                                                                                     │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ delete  │ -p old-k8s-version-855707                                                                                                                                                                                                                     │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ start   │ -p default-k8s-diff-port-058924 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:53 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-058924 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-058924 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-058924 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
	│ start   │ -p default-k8s-diff-port-058924 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:54 UTC │
	│ image   │ default-k8s-diff-port-058924 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
	│ pause   │ -p default-k8s-diff-port-058924 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-058924                                                                                                                                                                                                               │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
	│ delete  │ -p default-k8s-diff-port-058924                                                                                                                                                                                                               │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
	│ start   │ -p embed-certs-193865 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:55 UTC │
	│ addons  │ enable metrics-server -p embed-certs-193865 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │                     │
	│ stop    │ -p embed-certs-193865 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │ 27 Dec 25 20:55 UTC │
	│ addons  │ enable dashboard -p embed-certs-193865 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │ 27 Dec 25 20:55 UTC │
	│ start   │ -p embed-certs-193865 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │ 27 Dec 25 20:56 UTC │
	│ image   │ embed-certs-193865 image list --format=json                                                                                                                                                                                                   │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:56 UTC │ 27 Dec 25 20:56 UTC │
	│ pause   │ -p embed-certs-193865 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:55:51
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:55:51.893360  500426 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:55:51.893504  500426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:55:51.893514  500426 out.go:374] Setting ErrFile to fd 2...
	I1227 20:55:51.893520  500426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:55:51.893759  500426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:55:51.894104  500426 out.go:368] Setting JSON to false
	I1227 20:55:51.894929  500426 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9504,"bootTime":1766859448,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:55:51.894998  500426 start.go:143] virtualization:  
	I1227 20:55:51.899957  500426 out.go:179] * [embed-certs-193865] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:55:51.903011  500426 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:55:51.903144  500426 notify.go:221] Checking for updates...
	I1227 20:55:51.908863  500426 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:55:51.911820  500426 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:55:51.915150  500426 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:55:51.918081  500426 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:55:51.920957  500426 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:55:51.924286  500426 config.go:182] Loaded profile config "embed-certs-193865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:55:51.924880  500426 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:55:51.957560  500426 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:55:51.957703  500426 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:55:52.015154  500426 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:55:52.004054917 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:55:52.015272  500426 docker.go:319] overlay module found
	I1227 20:55:52.018404  500426 out.go:179] * Using the docker driver based on existing profile
	I1227 20:55:52.021364  500426 start.go:309] selected driver: docker
	I1227 20:55:52.021390  500426 start.go:928] validating driver "docker" against &{Name:embed-certs-193865 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-193865 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:55:52.021650  500426 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:55:52.022425  500426 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:55:52.079562  500426 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:55:52.06921667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:55:52.079951  500426 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:55:52.079986  500426 cni.go:84] Creating CNI manager for ""
	I1227 20:55:52.080044  500426 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:55:52.080086  500426 start.go:353] cluster config:
	{Name:embed-certs-193865 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-193865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:55:52.083436  500426 out.go:179] * Starting "embed-certs-193865" primary control-plane node in "embed-certs-193865" cluster
	I1227 20:55:52.086313  500426 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:55:52.089361  500426 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:55:52.092315  500426 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:55:52.092368  500426 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:55:52.092392  500426 cache.go:65] Caching tarball of preloaded images
	I1227 20:55:52.092410  500426 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:55:52.092483  500426 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:55:52.092495  500426 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:55:52.092615  500426 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/config.json ...
	I1227 20:55:52.113419  500426 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:55:52.113468  500426 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:55:52.113489  500426 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:55:52.113522  500426 start.go:360] acquireMachinesLock for embed-certs-193865: {Name:mkc50e87a609f0ebbab428159240cc886136162f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:55:52.113597  500426 start.go:364] duration metric: took 45.685µs to acquireMachinesLock for "embed-certs-193865"
	I1227 20:55:52.113620  500426 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:55:52.113632  500426 fix.go:54] fixHost starting: 
	I1227 20:55:52.113902  500426 cli_runner.go:164] Run: docker container inspect embed-certs-193865 --format={{.State.Status}}
	I1227 20:55:52.130839  500426 fix.go:112] recreateIfNeeded on embed-certs-193865: state=Stopped err=<nil>
	W1227 20:55:52.130881  500426 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:55:52.134048  500426 out.go:252] * Restarting existing docker container for "embed-certs-193865" ...
	I1227 20:55:52.134138  500426 cli_runner.go:164] Run: docker start embed-certs-193865
	I1227 20:55:52.390288  500426 cli_runner.go:164] Run: docker container inspect embed-certs-193865 --format={{.State.Status}}
	I1227 20:55:52.412991  500426 kic.go:430] container "embed-certs-193865" state is running.
	I1227 20:55:52.413376  500426 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-193865
	I1227 20:55:52.435839  500426 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/config.json ...
	I1227 20:55:52.437000  500426 machine.go:94] provisionDockerMachine start ...
	I1227 20:55:52.437067  500426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:52.457615  500426 main.go:144] libmachine: Using SSH client type: native
	I1227 20:55:52.457942  500426 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1227 20:55:52.457951  500426 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:55:52.458718  500426 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39556->127.0.0.1:33433: read: connection reset by peer
	I1227 20:55:55.604947  500426 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-193865
	
	I1227 20:55:55.604971  500426 ubuntu.go:182] provisioning hostname "embed-certs-193865"
	I1227 20:55:55.605039  500426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:55.622956  500426 main.go:144] libmachine: Using SSH client type: native
	I1227 20:55:55.623262  500426 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1227 20:55:55.623277  500426 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-193865 && echo "embed-certs-193865" | sudo tee /etc/hostname
	I1227 20:55:55.770626  500426 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-193865
	
	I1227 20:55:55.770734  500426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:55.790101  500426 main.go:144] libmachine: Using SSH client type: native
	I1227 20:55:55.790416  500426 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1227 20:55:55.790438  500426 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-193865' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-193865/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-193865' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:55:55.925685  500426 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:55:55.925728  500426 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:55:55.925755  500426 ubuntu.go:190] setting up certificates
	I1227 20:55:55.925763  500426 provision.go:84] configureAuth start
	I1227 20:55:55.925826  500426 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-193865
	I1227 20:55:55.942837  500426 provision.go:143] copyHostCerts
	I1227 20:55:55.942902  500426 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:55:55.942924  500426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:55:55.943006  500426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:55:55.943116  500426 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:55:55.943128  500426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:55:55.943156  500426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:55:55.943258  500426 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:55:55.943269  500426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:55:55.943294  500426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:55:55.943355  500426 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.embed-certs-193865 san=[127.0.0.1 192.168.76.2 embed-certs-193865 localhost minikube]
	I1227 20:55:56.228230  500426 provision.go:177] copyRemoteCerts
	I1227 20:55:56.228297  500426 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:55:56.228335  500426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:56.247040  500426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa Username:docker}
	I1227 20:55:56.345188  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:55:56.361774  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 20:55:56.379193  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:55:56.396215  500426 provision.go:87] duration metric: took 470.429035ms to configureAuth
	I1227 20:55:56.396241  500426 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:55:56.396435  500426 config.go:182] Loaded profile config "embed-certs-193865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:55:56.396540  500426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:56.413749  500426 main.go:144] libmachine: Using SSH client type: native
	I1227 20:55:56.414061  500426 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1227 20:55:56.414075  500426 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:55:56.753114  500426 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:55:56.753135  500426 machine.go:97] duration metric: took 4.316118872s to provisionDockerMachine
	I1227 20:55:56.753146  500426 start.go:293] postStartSetup for "embed-certs-193865" (driver="docker")
	I1227 20:55:56.753156  500426 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:55:56.753216  500426 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:55:56.753265  500426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:56.776129  500426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa Username:docker}
	I1227 20:55:56.873193  500426 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:55:56.876513  500426 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:55:56.876540  500426 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:55:56.876553  500426 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:55:56.876605  500426 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:55:56.876695  500426 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:55:56.876805  500426 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:55:56.884128  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:55:56.900846  500426 start.go:296] duration metric: took 147.684789ms for postStartSetup
	I1227 20:55:56.900922  500426 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:55:56.900976  500426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:56.918367  500426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa Username:docker}
	I1227 20:55:57.016289  500426 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:55:57.021746  500426 fix.go:56] duration metric: took 4.908107567s for fixHost
	I1227 20:55:57.021773  500426 start.go:83] releasing machines lock for "embed-certs-193865", held for 4.908164698s
	I1227 20:55:57.021856  500426 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-193865
	I1227 20:55:57.038644  500426 ssh_runner.go:195] Run: cat /version.json
	I1227 20:55:57.038703  500426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:57.038723  500426 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:55:57.038792  500426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:57.059097  500426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa Username:docker}
	I1227 20:55:57.068037  500426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa Username:docker}
	I1227 20:55:57.245339  500426 ssh_runner.go:195] Run: systemctl --version
	I1227 20:55:57.251754  500426 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:55:57.286602  500426 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:55:57.290878  500426 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:55:57.290956  500426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:55:57.298478  500426 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:55:57.298510  500426 start.go:496] detecting cgroup driver to use...
	I1227 20:55:57.298548  500426 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:55:57.298598  500426 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:55:57.313341  500426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:55:57.326359  500426 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:55:57.326430  500426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:55:57.341698  500426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:55:57.354529  500426 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:55:57.471131  500426 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:55:57.607584  500426 docker.go:234] disabling docker service ...
	I1227 20:55:57.607662  500426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:55:57.625277  500426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:55:57.638144  500426 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:55:57.759703  500426 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:55:57.876264  500426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:55:57.889434  500426 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:55:57.903173  500426 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:55:57.903286  500426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:55:57.911566  500426 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:55:57.911716  500426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:55:57.919996  500426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:55:57.928101  500426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:55:57.936591  500426 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:55:57.944466  500426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:55:57.952931  500426 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:55:57.960884  500426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:55:57.969246  500426 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:55:57.976771  500426 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:55:57.984026  500426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:55:58.107374  500426 ssh_runner.go:195] Run: sudo systemctl restart crio
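Note: the sed edits above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf before this restart. A quick way to confirm the intended values landed is sketched below; this is not part of the test run, and the rest of the drop-in depends on the base image's defaults.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected to include, assuming the edits above applied cleanly:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",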
	I1227 20:55:58.276289  500426 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:55:58.276428  500426 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:55:58.280828  500426 start.go:574] Will wait 60s for crictl version
	I1227 20:55:58.280892  500426 ssh_runner.go:195] Run: which crictl
	I1227 20:55:58.284605  500426 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:55:58.309151  500426 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:55:58.309355  500426 ssh_runner.go:195] Run: crio --version
	I1227 20:55:58.338041  500426 ssh_runner.go:195] Run: crio --version
	I1227 20:55:58.369037  500426 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:55:58.371866  500426 cli_runner.go:164] Run: docker network inspect embed-certs-193865 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:55:58.387528  500426 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 20:55:58.391369  500426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:55:58.400668  500426 kubeadm.go:884] updating cluster {Name:embed-certs-193865 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-193865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:55:58.400781  500426 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:55:58.400831  500426 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:55:58.437161  500426 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:55:58.437184  500426 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:55:58.437238  500426 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:55:58.466919  500426 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:55:58.466943  500426 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:55:58.466951  500426 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 20:55:58.467054  500426 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-193865 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-193865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
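Note: the kubelet command line shown above is written to the node as a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). To inspect the effective unit on the node one could run the following; this is a sketch, not part of the test run.
	sudo systemctl cat kubelet
	sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf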
	I1227 20:55:58.467145  500426 ssh_runner.go:195] Run: crio config
	I1227 20:55:58.538758  500426 cni.go:84] Creating CNI manager for ""
	I1227 20:55:58.538784  500426 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:55:58.538812  500426 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:55:58.538837  500426 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-193865 NodeName:embed-certs-193865 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:55:58.539003  500426 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-193865"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:55:58.539088  500426 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:55:58.546857  500426 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:55:58.546927  500426 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:55:58.554316  500426 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1227 20:55:58.567149  500426 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:55:58.579742  500426 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
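Note: the rendered kubeadm config is copied to /var/tmp/minikube/kubeadm.yaml.new; minikube later decides whether the control plane needs reconfiguring by diffing it against the file already on the node (see the diff run at 20:55:59 below, which concludes no reconfiguration is required). A manual equivalent of that check, assuming the same paths, is sketched here.
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  && echo "running config matches; no reconfiguration needed" \
	  || echo "config changed; control plane restart required"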
	I1227 20:55:58.592042  500426 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:55:58.595801  500426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:55:58.605015  500426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:55:58.714015  500426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:55:58.730248  500426 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865 for IP: 192.168.76.2
	I1227 20:55:58.730280  500426 certs.go:195] generating shared ca certs ...
	I1227 20:55:58.730296  500426 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:55:58.730559  500426 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:55:58.730656  500426 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:55:58.730688  500426 certs.go:257] generating profile certs ...
	I1227 20:55:58.730867  500426 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/client.key
	I1227 20:55:58.731006  500426 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/apiserver.key.b049a295
	I1227 20:55:58.731070  500426 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/proxy-client.key
	I1227 20:55:58.731244  500426 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:55:58.731316  500426 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:55:58.731337  500426 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:55:58.731391  500426 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:55:58.731458  500426 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:55:58.731510  500426 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:55:58.731590  500426 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:55:58.732267  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:55:58.756624  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:55:58.780509  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:55:58.801936  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:55:58.824671  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1227 20:55:58.849341  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 20:55:58.885908  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:55:58.909115  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/embed-certs-193865/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 20:55:58.928415  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:55:58.948081  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:55:58.970390  500426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:55:58.991142  500426 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:55:59.004873  500426 ssh_runner.go:195] Run: openssl version
	I1227 20:55:59.012370  500426 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:55:59.020139  500426 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:55:59.027605  500426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:55:59.031198  500426 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:55:59.031269  500426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:55:59.072328  500426 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:55:59.079711  500426 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:55:59.087050  500426 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:55:59.094369  500426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:55:59.098419  500426 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:55:59.098487  500426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:55:59.139302  500426 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:55:59.146837  500426 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:55:59.154188  500426 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:55:59.162142  500426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:55:59.167234  500426 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:55:59.167320  500426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:55:59.208752  500426 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
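Note: each CA bundle copied above is exposed to OpenSSL through a symlink named after its subject hash, which is what the "test -L" probes check. The mapping can be reproduced by hand as sketched below; the hash value is taken from this log (minikubeCA.pem corresponds to b5213941.0).
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0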
	I1227 20:55:59.217172  500426 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:55:59.221587  500426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:55:59.265760  500426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:55:59.306781  500426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:55:59.347583  500426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:55:59.394387  500426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:55:59.443657  500426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 20:55:59.499094  500426 kubeadm.go:401] StartCluster: {Name:embed-certs-193865 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-193865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:55:59.499191  500426 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:55:59.499249  500426 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:55:59.542440  500426 cri.go:96] found id: "e6ca226eab1fb21c9058d6260555c9c845dbc797687277be4d78f9bff45c68ae"
	I1227 20:55:59.542463  500426 cri.go:96] found id: "042eda9613782dfa323700aa0d06a99229b8b2dd3a00161d5be2ccee081daeb7"
	I1227 20:55:59.542468  500426 cri.go:96] found id: ""
	I1227 20:55:59.542540  500426 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:55:59.573933  500426 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:55:59Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:55:59.574010  500426 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:55:59.589375  500426 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:55:59.589393  500426 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:55:59.589485  500426 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:55:59.601907  500426 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:55:59.602293  500426 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-193865" does not appear in /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:55:59.602388  500426 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-272475/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-193865" cluster setting kubeconfig missing "embed-certs-193865" context setting]
	I1227 20:55:59.602686  500426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:55:59.603807  500426 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:55:59.617607  500426 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1227 20:55:59.617639  500426 kubeadm.go:602] duration metric: took 28.240718ms to restartPrimaryControlPlane
	I1227 20:55:59.617649  500426 kubeadm.go:403] duration metric: took 118.56788ms to StartCluster
	I1227 20:55:59.617664  500426 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:55:59.617736  500426 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:55:59.619157  500426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:55:59.619791  500426 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:55:59.620702  500426 config.go:182] Loaded profile config "embed-certs-193865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:55:59.620756  500426 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:55:59.620924  500426 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-193865"
	I1227 20:55:59.620948  500426 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-193865"
	W1227 20:55:59.620964  500426 addons.go:248] addon storage-provisioner should already be in state true
	I1227 20:55:59.620986  500426 host.go:66] Checking if "embed-certs-193865" exists ...
	I1227 20:55:59.621574  500426 cli_runner.go:164] Run: docker container inspect embed-certs-193865 --format={{.State.Status}}
	I1227 20:55:59.621786  500426 addons.go:70] Setting dashboard=true in profile "embed-certs-193865"
	I1227 20:55:59.621813  500426 addons.go:239] Setting addon dashboard=true in "embed-certs-193865"
	W1227 20:55:59.621820  500426 addons.go:248] addon dashboard should already be in state true
	I1227 20:55:59.621844  500426 host.go:66] Checking if "embed-certs-193865" exists ...
	I1227 20:55:59.622375  500426 cli_runner.go:164] Run: docker container inspect embed-certs-193865 --format={{.State.Status}}
	I1227 20:55:59.624690  500426 addons.go:70] Setting default-storageclass=true in profile "embed-certs-193865"
	I1227 20:55:59.624750  500426 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-193865"
	I1227 20:55:59.631687  500426 out.go:179] * Verifying Kubernetes components...
	I1227 20:55:59.633025  500426 cli_runner.go:164] Run: docker container inspect embed-certs-193865 --format={{.State.Status}}
	I1227 20:55:59.635583  500426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:55:59.681668  500426 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 20:55:59.684700  500426 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:55:59.687629  500426 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:55:59.687652  500426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:55:59.687721  500426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:59.687934  500426 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 20:55:59.691523  500426 addons.go:239] Setting addon default-storageclass=true in "embed-certs-193865"
	W1227 20:55:59.691549  500426 addons.go:248] addon default-storageclass should already be in state true
	I1227 20:55:59.691572  500426 host.go:66] Checking if "embed-certs-193865" exists ...
	I1227 20:55:59.692037  500426 cli_runner.go:164] Run: docker container inspect embed-certs-193865 --format={{.State.Status}}
	I1227 20:55:59.692433  500426 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 20:55:59.692456  500426 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 20:55:59.692503  500426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:59.737419  500426 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:55:59.737440  500426 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:55:59.737534  500426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-193865
	I1227 20:55:59.753577  500426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa Username:docker}
	I1227 20:55:59.768175  500426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa Username:docker}
	I1227 20:55:59.784074  500426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/embed-certs-193865/id_rsa Username:docker}
	I1227 20:55:59.952072  500426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:55:59.955025  500426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:56:00.084411  500426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:56:00.122201  500426 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 20:56:00.122230  500426 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 20:56:00.213206  500426 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 20:56:00.213233  500426 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 20:56:00.294003  500426 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 20:56:00.294086  500426 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 20:56:00.350901  500426 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 20:56:00.350927  500426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 20:56:00.371697  500426 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 20:56:00.371792  500426 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 20:56:00.400511  500426 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 20:56:00.400603  500426 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 20:56:00.486751  500426 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 20:56:00.486839  500426 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 20:56:00.511202  500426 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 20:56:00.511280  500426 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 20:56:00.530085  500426 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:56:00.530167  500426 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 20:56:00.554074  500426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:56:04.199287  500426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.247133339s)
	I1227 20:56:04.199345  500426 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.244298922s)
	I1227 20:56:04.199377  500426 node_ready.go:35] waiting up to 6m0s for node "embed-certs-193865" to be "Ready" ...
	I1227 20:56:04.199691  500426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.115251169s)
	I1227 20:56:04.253208  500426 node_ready.go:49] node "embed-certs-193865" is "Ready"
	I1227 20:56:04.253285  500426 node_ready.go:38] duration metric: took 53.889485ms for node "embed-certs-193865" to be "Ready" ...
	I1227 20:56:04.253313  500426 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:56:04.253397  500426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:56:04.367113  500426 api_server.go:72] duration metric: took 4.747283511s to wait for apiserver process to appear ...
	I1227 20:56:04.367138  500426 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:56:04.367159  500426 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:56:04.367541  500426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.813374429s)
	I1227 20:56:04.378569  500426 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-193865 addons enable metrics-server
	
	I1227 20:56:04.382738  500426 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1227 20:56:04.383462  500426 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 20:56:04.383484  500426 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 20:56:04.386467  500426 addons.go:530] duration metric: took 4.765712745s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1227 20:56:04.868101  500426 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:56:04.876068  500426 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 20:56:04.877160  500426 api_server.go:141] control plane version: v1.35.0
	I1227 20:56:04.877217  500426 api_server.go:131] duration metric: took 510.070791ms to wait for apiserver health ...
	I1227 20:56:04.877241  500426 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:56:04.880954  500426 system_pods.go:59] 8 kube-system pods found
	I1227 20:56:04.880995  500426 system_pods.go:61] "coredns-7d764666f9-xj2kx" [bb4db36b-a468-42ed-a57d-07d66fd3677f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:56:04.881006  500426 system_pods.go:61] "etcd-embed-certs-193865" [189ff29b-8bc4-4c48-8fd1-32e246482296] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:56:04.881021  500426 system_pods.go:61] "kindnet-fqnrt" [6f652890-8212-487a-a479-ac54591d0db0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:56:04.881029  500426 system_pods.go:61] "kube-apiserver-embed-certs-193865" [e71ae5e1-2d4c-413e-990a-9c9b539d62fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:56:04.881038  500426 system_pods.go:61] "kube-controller-manager-embed-certs-193865" [ad24b721-376b-410f-8d96-c7b80585aa44] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:56:04.881045  500426 system_pods.go:61] "kube-proxy-5mf9z" [2c7bfa55-35a8-4519-8282-2bd750cbc449] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 20:56:04.881061  500426 system_pods.go:61] "kube-scheduler-embed-certs-193865" [c0d62251-0ec8-4ac3-a396-5574e3a92155] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:56:04.881068  500426 system_pods.go:61] "storage-provisioner" [eaf08c7a-30b7-4c09-a98e-4b9be46b8f8d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:56:04.881075  500426 system_pods.go:74] duration metric: took 3.81418ms to wait for pod list to return data ...
	I1227 20:56:04.881088  500426 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:56:04.883771  500426 default_sa.go:45] found service account: "default"
	I1227 20:56:04.883800  500426 default_sa.go:55] duration metric: took 2.706584ms for default service account to be created ...
	I1227 20:56:04.883810  500426 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:56:04.886457  500426 system_pods.go:86] 8 kube-system pods found
	I1227 20:56:04.886493  500426 system_pods.go:89] "coredns-7d764666f9-xj2kx" [bb4db36b-a468-42ed-a57d-07d66fd3677f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:56:04.886503  500426 system_pods.go:89] "etcd-embed-certs-193865" [189ff29b-8bc4-4c48-8fd1-32e246482296] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:56:04.886522  500426 system_pods.go:89] "kindnet-fqnrt" [6f652890-8212-487a-a479-ac54591d0db0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:56:04.886534  500426 system_pods.go:89] "kube-apiserver-embed-certs-193865" [e71ae5e1-2d4c-413e-990a-9c9b539d62fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:56:04.886542  500426 system_pods.go:89] "kube-controller-manager-embed-certs-193865" [ad24b721-376b-410f-8d96-c7b80585aa44] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:56:04.886552  500426 system_pods.go:89] "kube-proxy-5mf9z" [2c7bfa55-35a8-4519-8282-2bd750cbc449] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 20:56:04.886559  500426 system_pods.go:89] "kube-scheduler-embed-certs-193865" [c0d62251-0ec8-4ac3-a396-5574e3a92155] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:56:04.886569  500426 system_pods.go:89] "storage-provisioner" [eaf08c7a-30b7-4c09-a98e-4b9be46b8f8d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:56:04.886575  500426 system_pods.go:126] duration metric: took 2.759998ms to wait for k8s-apps to be running ...
	I1227 20:56:04.886582  500426 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:56:04.886636  500426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:56:04.927495  500426 system_svc.go:56] duration metric: took 40.901685ms WaitForService to wait for kubelet
	I1227 20:56:04.927569  500426 kubeadm.go:587] duration metric: took 5.307742723s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:56:04.927620  500426 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:56:04.932672  500426 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:56:04.932748  500426 node_conditions.go:123] node cpu capacity is 2
	I1227 20:56:04.932775  500426 node_conditions.go:105] duration metric: took 5.136285ms to run NodePressure ...
	I1227 20:56:04.932803  500426 start.go:242] waiting for startup goroutines ...
	I1227 20:56:04.932835  500426 start.go:247] waiting for cluster config update ...
	I1227 20:56:04.932867  500426 start.go:256] writing updated cluster config ...
	I1227 20:56:04.933194  500426 ssh_runner.go:195] Run: rm -f paused
	I1227 20:56:04.937107  500426 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:56:04.940537  500426 pod_ready.go:83] waiting for pod "coredns-7d764666f9-xj2kx" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 20:56:06.947448  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:08.948579  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:11.446726  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:13.947114  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:16.446484  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:18.446845  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:20.947128  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:23.447074  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:25.945434  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:27.946117  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:30.446262  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:32.946176  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:35.446104  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:37.946848  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	W1227 20:56:40.445532  500426 pod_ready.go:104] pod "coredns-7d764666f9-xj2kx" is not "Ready", error: <nil>
	I1227 20:56:40.945313  500426 pod_ready.go:94] pod "coredns-7d764666f9-xj2kx" is "Ready"
	I1227 20:56:40.945341  500426 pod_ready.go:86] duration metric: took 36.004752161s for pod "coredns-7d764666f9-xj2kx" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:56:40.947871  500426 pod_ready.go:83] waiting for pod "etcd-embed-certs-193865" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:56:40.952126  500426 pod_ready.go:94] pod "etcd-embed-certs-193865" is "Ready"
	I1227 20:56:40.952153  500426 pod_ready.go:86] duration metric: took 4.254545ms for pod "etcd-embed-certs-193865" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:56:40.954269  500426 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-193865" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:56:40.959618  500426 pod_ready.go:94] pod "kube-apiserver-embed-certs-193865" is "Ready"
	I1227 20:56:40.959651  500426 pod_ready.go:86] duration metric: took 5.358901ms for pod "kube-apiserver-embed-certs-193865" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:56:40.961844  500426 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-193865" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:56:41.143910  500426 pod_ready.go:94] pod "kube-controller-manager-embed-certs-193865" is "Ready"
	I1227 20:56:41.143940  500426 pod_ready.go:86] duration metric: took 182.073077ms for pod "kube-controller-manager-embed-certs-193865" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:56:41.343860  500426 pod_ready.go:83] waiting for pod "kube-proxy-5mf9z" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:56:41.743631  500426 pod_ready.go:94] pod "kube-proxy-5mf9z" is "Ready"
	I1227 20:56:41.743662  500426 pod_ready.go:86] duration metric: took 399.772644ms for pod "kube-proxy-5mf9z" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:56:41.943936  500426 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-193865" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:56:42.344573  500426 pod_ready.go:94] pod "kube-scheduler-embed-certs-193865" is "Ready"
	I1227 20:56:42.344603  500426 pod_ready.go:86] duration metric: took 400.639363ms for pod "kube-scheduler-embed-certs-193865" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:56:42.344616  500426 pod_ready.go:40] duration metric: took 37.407451484s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:56:42.400279  500426 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 20:56:42.403909  500426 out.go:203] 
	W1227 20:56:42.407005  500426 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 20:56:42.410080  500426 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 20:56:42.413213  500426 out.go:179] * Done! kubectl is now configured to use "embed-certs-193865" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 27 20:56:44 embed-certs-193865 crio[654]: time="2025-12-27T20:56:44.641170864Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:56:44 embed-certs-193865 crio[654]: time="2025-12-27T20:56:44.645747124Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:56:44 embed-certs-193865 crio[654]: time="2025-12-27T20:56:44.645882324Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:56:44 embed-certs-193865 crio[654]: time="2025-12-27T20:56:44.645914356Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:56:44 embed-certs-193865 crio[654]: time="2025-12-27T20:56:44.651099781Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:56:44 embed-certs-193865 crio[654]: time="2025-12-27T20:56:44.65113475Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:56:44 embed-certs-193865 crio[654]: time="2025-12-27T20:56:44.651156099Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:56:44 embed-certs-193865 crio[654]: time="2025-12-27T20:56:44.654780647Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:56:44 embed-certs-193865 crio[654]: time="2025-12-27T20:56:44.654815198Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:56:44 embed-certs-193865 crio[654]: time="2025-12-27T20:56:44.654839206Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:56:44 embed-certs-193865 crio[654]: time="2025-12-27T20:56:44.658694935Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:56:44 embed-certs-193865 crio[654]: time="2025-12-27T20:56:44.658723365Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:56:49 embed-certs-193865 crio[654]: time="2025-12-27T20:56:49.955146623Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=165ccd99-a8d2-460b-a867-24f20b090613 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:56:49 embed-certs-193865 crio[654]: time="2025-12-27T20:56:49.956090843Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cec001e0-b48e-4b81-ab61-f096979e315a name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:56:49 embed-certs-193865 crio[654]: time="2025-12-27T20:56:49.957064396Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz/dashboard-metrics-scraper" id=5378f628-ecc0-4468-a610-da9f3dc2e3cb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:56:49 embed-certs-193865 crio[654]: time="2025-12-27T20:56:49.957157152Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:56:49 embed-certs-193865 crio[654]: time="2025-12-27T20:56:49.963879577Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:56:49 embed-certs-193865 crio[654]: time="2025-12-27T20:56:49.96567494Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:56:49 embed-certs-193865 crio[654]: time="2025-12-27T20:56:49.982176829Z" level=info msg="Created container 8aa96fff450baf4425feed3f8caffc607682e228422def93111eb887ed139977: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz/dashboard-metrics-scraper" id=5378f628-ecc0-4468-a610-da9f3dc2e3cb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:56:49 embed-certs-193865 crio[654]: time="2025-12-27T20:56:49.983934335Z" level=info msg="Starting container: 8aa96fff450baf4425feed3f8caffc607682e228422def93111eb887ed139977" id=046410d8-d74c-4ce9-8f9f-c41e703688b5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:56:49 embed-certs-193865 crio[654]: time="2025-12-27T20:56:49.985999665Z" level=info msg="Started container" PID=1742 containerID=8aa96fff450baf4425feed3f8caffc607682e228422def93111eb887ed139977 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz/dashboard-metrics-scraper id=046410d8-d74c-4ce9-8f9f-c41e703688b5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8a5e13fe6839f416e5fd7e9050cdd4c63bb5e95f56df123e41a15c33aee91e8b
	Dec 27 20:56:49 embed-certs-193865 conmon[1740]: conmon 8aa96fff450baf4425fe <ninfo>: container 1742 exited with status 1
	Dec 27 20:56:50 embed-certs-193865 crio[654]: time="2025-12-27T20:56:50.236721755Z" level=info msg="Removing container: c358c768b3adb42665b80284fa9c986928f9b4359bd0429fba18e82301a41512" id=bf153da0-e2b6-472c-b9da-e5a4ddf43369 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:56:50 embed-certs-193865 crio[654]: time="2025-12-27T20:56:50.266181137Z" level=info msg="Error loading conmon cgroup of container c358c768b3adb42665b80284fa9c986928f9b4359bd0429fba18e82301a41512: cgroup deleted" id=bf153da0-e2b6-472c-b9da-e5a4ddf43369 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:56:50 embed-certs-193865 crio[654]: time="2025-12-27T20:56:50.272570229Z" level=info msg="Removed container c358c768b3adb42665b80284fa9c986928f9b4359bd0429fba18e82301a41512: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz/dashboard-metrics-scraper" id=bf153da0-e2b6-472c-b9da-e5a4ddf43369 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8aa96fff450ba       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago        Exited              dashboard-metrics-scraper   3                   8a5e13fe6839f       dashboard-metrics-scraper-867fb5f87b-p8jjz   kubernetes-dashboard
	be8b934ba2d3a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   7bb55720f14d9       storage-provisioner                          kube-system
	82f9c7926d9e0       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago       Running             kubernetes-dashboard        0                   8dc5bfd7649f6       kubernetes-dashboard-b84665fb8-44qk4         kubernetes-dashboard
	de35335f8b6e8       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   d4b2ad768e893       busybox                                      default
	4af7ead68be14       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           55 seconds ago       Running             coredns                     1                   639ea4b96e9a3       coredns-7d764666f9-xj2kx                     kube-system
	1727e2655810d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   7bb55720f14d9       storage-provisioner                          kube-system
	ca8c3de3fdc21       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           55 seconds ago       Running             kindnet-cni                 1                   c15d8b555831e       kindnet-fqnrt                                kube-system
	da7a6ea56aa7b       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           55 seconds ago       Running             kube-proxy                  1                   15aa91376cda8       kube-proxy-5mf9z                             kube-system
	3dfb4788db04d       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           About a minute ago   Running             kube-scheduler              1                   20eba3cdf6fa7       kube-scheduler-embed-certs-193865            kube-system
	4e5cabfe80bde       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           About a minute ago   Running             kube-controller-manager     1                   95715ee181dbb       kube-controller-manager-embed-certs-193865   kube-system
	e6ca226eab1fb       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           About a minute ago   Running             kube-apiserver              1                   29ab97bb8a6dd       kube-apiserver-embed-certs-193865            kube-system
	042eda9613782       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           About a minute ago   Running             etcd                        1                   ee457a87f1344       etcd-embed-certs-193865                      kube-system
	
	
	==> coredns [4af7ead68be14764fdb90b14930b698a11839ca32ca4aad38127b0a1c26f10ea] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:59208 - 55852 "HINFO IN 8556797303068945170.8015881326220980466. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022152962s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               embed-certs-193865
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-193865
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=embed-certs-193865
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_55_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:55:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-193865
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:56:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:56:33 +0000   Sat, 27 Dec 2025 20:55:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:56:33 +0000   Sat, 27 Dec 2025 20:55:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:56:33 +0000   Sat, 27 Dec 2025 20:55:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:56:33 +0000   Sat, 27 Dec 2025 20:55:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-193865
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                5669f867-44f3-47ed-a81f-7695205dabf5
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-7d764666f9-xj2kx                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     106s
	  kube-system                 etcd-embed-certs-193865                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         111s
	  kube-system                 kindnet-fqnrt                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-embed-certs-193865             250m (12%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-embed-certs-193865    200m (10%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-5mf9z                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-embed-certs-193865             100m (5%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-p8jjz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-44qk4          0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  108s  node-controller  Node embed-certs-193865 event: Registered Node embed-certs-193865 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node embed-certs-193865 event: Registered Node embed-certs-193865 in Controller
	
	
	==> dmesg <==
	[Dec27 20:23] overlayfs: idmapped layers are currently not supported
	[Dec27 20:24] overlayfs: idmapped layers are currently not supported
	[Dec27 20:25] overlayfs: idmapped layers are currently not supported
	[ +35.447549] overlayfs: idmapped layers are currently not supported
	[Dec27 20:26] overlayfs: idmapped layers are currently not supported
	[Dec27 20:27] overlayfs: idmapped layers are currently not supported
	[  +6.770645] overlayfs: idmapped layers are currently not supported
	[Dec27 20:28] overlayfs: idmapped layers are currently not supported
	[ +25.872751] overlayfs: idmapped layers are currently not supported
	[Dec27 20:29] overlayfs: idmapped layers are currently not supported
	[ +32.997137] overlayfs: idmapped layers are currently not supported
	[Dec27 20:31] overlayfs: idmapped layers are currently not supported
	[Dec27 20:33] overlayfs: idmapped layers are currently not supported
	[ +33.772475] overlayfs: idmapped layers are currently not supported
	[Dec27 20:39] overlayfs: idmapped layers are currently not supported
	[Dec27 20:40] overlayfs: idmapped layers are currently not supported
	[Dec27 20:44] overlayfs: idmapped layers are currently not supported
	[Dec27 20:45] overlayfs: idmapped layers are currently not supported
	[Dec27 20:49] overlayfs: idmapped layers are currently not supported
	[Dec27 20:50] overlayfs: idmapped layers are currently not supported
	[Dec27 20:51] overlayfs: idmapped layers are currently not supported
	[Dec27 20:52] overlayfs: idmapped layers are currently not supported
	[Dec27 20:53] overlayfs: idmapped layers are currently not supported
	[Dec27 20:55] overlayfs: idmapped layers are currently not supported
	[ +57.272039] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [042eda9613782dfa323700aa0d06a99229b8b2dd3a00161d5be2ccee081daeb7] <==
	{"level":"info","ts":"2025-12-27T20:55:59.991482Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T20:55:59.991492Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T20:55:59.991684Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T20:55:59.991695Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T20:55:59.992723Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-27T20:55:59.992820Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T20:55:59.992880Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T20:56:00.285843Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T20:56:00.285997Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:56:00.286073Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T20:56:00.286116Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:56:00.286171Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T20:56:00.295859Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T20:56:00.296010Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:56:00.296067Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T20:56:00.296107Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T20:56:00.310150Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-193865 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:56:00.310461Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:56:00.310672Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:56:00.311755Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:56:00.324729Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:56:00.330307Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:56:00.339777Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:56:00.339849Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:56:00.412651Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 20:56:59 up  2:39,  0 user,  load average: 1.29, 1.29, 1.63
	Linux embed-certs-193865 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ca8c3de3fdc21beb2e56b12111a476cd88bb9d76e087d7bc994a71989d012ece] <==
	I1227 20:56:04.449911       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:56:04.450082       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 20:56:04.450199       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:56:04.450210       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:56:04.450221       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:56:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:56:04.637272       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:56:04.637350       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:56:04.637386       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:56:04.637838       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 20:56:34.637372       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1227 20:56:34.637383       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1227 20:56:34.638302       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1227 20:56:34.638314       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1227 20:56:35.838238       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:56:35.838267       1 metrics.go:72] Registering metrics
	I1227 20:56:35.838329       1 controller.go:711] "Syncing nftables rules"
	I1227 20:56:44.637346       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:56:44.637399       1 main.go:301] handling current node
	I1227 20:56:54.639656       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:56:54.639690       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e6ca226eab1fb21c9058d6260555c9c845dbc797687277be4d78f9bff45c68ae] <==
	I1227 20:56:03.178727       1 aggregator.go:187] initial CRD sync complete...
	I1227 20:56:03.178751       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 20:56:03.178757       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:56:03.178764       1 cache.go:39] Caches are synced for autoregister controller
	I1227 20:56:03.190269       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:56:03.202508       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:56:03.231639       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 20:56:03.231726       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 20:56:03.231770       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 20:56:03.231969       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 20:56:03.243756       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 20:56:03.245716       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:03.260991       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1227 20:56:03.302585       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 20:56:03.747080       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:56:03.792026       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 20:56:03.867595       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:56:03.872312       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:56:03.962958       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:56:03.993706       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:56:04.302401       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.90.159"}
	I1227 20:56:04.354531       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.184.114"}
	I1227 20:56:06.523021       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:56:06.774602       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:56:06.822543       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4e5cabfe80bde33d172c974ffd714e8d551a86c345273a6f54f995aca0fd5be9] <==
	I1227 20:56:06.136269       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.136489       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.136589       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.136796       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.136972       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.137098       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.138103       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.138579       1 range_allocator.go:177] "Sending events to api server"
	I1227 20:56:06.138663       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 20:56:06.138692       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:56:06.138722       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.138841       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.139045       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.139091       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.141900       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:56:06.142784       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.142978       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.143398       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.166699       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.229952       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.229978       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:56:06.229984       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:56:06.243979       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:06.829057       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1227 20:56:06.830736       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [da7a6ea56aa7b1cc7394b633af80fcf03ded1031c60eee45e624da67ab4f23e0] <==
	I1227 20:56:04.507744       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:56:04.618932       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:56:04.719946       1 shared_informer.go:377] "Caches are synced"
	I1227 20:56:04.719981       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 20:56:04.720063       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:56:04.742880       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:56:04.742929       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:56:04.747004       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:56:04.747299       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:56:04.747318       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:56:04.748382       1 config.go:200] "Starting service config controller"
	I1227 20:56:04.748400       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:56:04.751639       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:56:04.751726       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:56:04.751770       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:56:04.751797       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:56:04.752086       1 config.go:309] "Starting node config controller"
	I1227 20:56:04.752106       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:56:04.848575       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:56:04.851806       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 20:56:04.851947       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 20:56:04.852263       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [3dfb4788db04d24ff921ca961d74a35736ceb9dcb271f67d3eef434cef1c7725] <==
	I1227 20:56:01.350522       1 serving.go:386] Generated self-signed cert in-memory
	W1227 20:56:03.009101       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 20:56:03.012614       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 20:56:03.012707       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 20:56:03.012716       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 20:56:03.163928       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 20:56:03.168755       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:56:03.170980       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 20:56:03.171121       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 20:56:03.171132       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:56:03.171147       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 20:56:03.271584       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:56:17 embed-certs-193865 kubelet[781]: E1227 20:56:17.139558     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-44qk4" containerName="kubernetes-dashboard"
	Dec 27 20:56:17 embed-certs-193865 kubelet[781]: E1227 20:56:17.765675     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz" containerName="dashboard-metrics-scraper"
	Dec 27 20:56:17 embed-certs-193865 kubelet[781]: I1227 20:56:17.765724     781 scope.go:122] "RemoveContainer" containerID="55e33c20479d31e2b0cd7977c093d8bb81f5bcb1aa82240fc621bc4ced59e3dc"
	Dec 27 20:56:17 embed-certs-193865 kubelet[781]: E1227 20:56:17.765921     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-p8jjz_kubernetes-dashboard(bbc1d301-7652-4cfe-b1a7-d1b8e21d8c76)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz" podUID="bbc1d301-7652-4cfe-b1a7-d1b8e21d8c76"
	Dec 27 20:56:22 embed-certs-193865 kubelet[781]: E1227 20:56:22.954786     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz" containerName="dashboard-metrics-scraper"
	Dec 27 20:56:22 embed-certs-193865 kubelet[781]: I1227 20:56:22.954827     781 scope.go:122] "RemoveContainer" containerID="55e33c20479d31e2b0cd7977c093d8bb81f5bcb1aa82240fc621bc4ced59e3dc"
	Dec 27 20:56:23 embed-certs-193865 kubelet[781]: I1227 20:56:23.154067     781 scope.go:122] "RemoveContainer" containerID="55e33c20479d31e2b0cd7977c093d8bb81f5bcb1aa82240fc621bc4ced59e3dc"
	Dec 27 20:56:23 embed-certs-193865 kubelet[781]: E1227 20:56:23.154352     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz" containerName="dashboard-metrics-scraper"
	Dec 27 20:56:23 embed-certs-193865 kubelet[781]: I1227 20:56:23.154379     781 scope.go:122] "RemoveContainer" containerID="c358c768b3adb42665b80284fa9c986928f9b4359bd0429fba18e82301a41512"
	Dec 27 20:56:23 embed-certs-193865 kubelet[781]: E1227 20:56:23.154554     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-p8jjz_kubernetes-dashboard(bbc1d301-7652-4cfe-b1a7-d1b8e21d8c76)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz" podUID="bbc1d301-7652-4cfe-b1a7-d1b8e21d8c76"
	Dec 27 20:56:23 embed-certs-193865 kubelet[781]: I1227 20:56:23.171764     781 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-44qk4" podStartSLOduration=8.903759077 podStartE2EDuration="17.171750123s" podCreationTimestamp="2025-12-27 20:56:06 +0000 UTC" firstStartedPulling="2025-12-27 20:56:06.972575636 +0000 UTC m=+8.239529353" lastFinishedPulling="2025-12-27 20:56:15.240566683 +0000 UTC m=+16.507520399" observedRunningTime="2025-12-27 20:56:16.150867705 +0000 UTC m=+17.417821422" watchObservedRunningTime="2025-12-27 20:56:23.171750123 +0000 UTC m=+24.438703839"
	Dec 27 20:56:27 embed-certs-193865 kubelet[781]: E1227 20:56:27.766320     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz" containerName="dashboard-metrics-scraper"
	Dec 27 20:56:27 embed-certs-193865 kubelet[781]: I1227 20:56:27.766805     781 scope.go:122] "RemoveContainer" containerID="c358c768b3adb42665b80284fa9c986928f9b4359bd0429fba18e82301a41512"
	Dec 27 20:56:27 embed-certs-193865 kubelet[781]: E1227 20:56:27.767051     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-p8jjz_kubernetes-dashboard(bbc1d301-7652-4cfe-b1a7-d1b8e21d8c76)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz" podUID="bbc1d301-7652-4cfe-b1a7-d1b8e21d8c76"
	Dec 27 20:56:35 embed-certs-193865 kubelet[781]: I1227 20:56:35.184994     781 scope.go:122] "RemoveContainer" containerID="1727e2655810dd7761bc82b6eccd9673976f5a22e959a68cb5eb1c718414ce6d"
	Dec 27 20:56:40 embed-certs-193865 kubelet[781]: E1227 20:56:40.553118     781 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-xj2kx" containerName="coredns"
	Dec 27 20:56:49 embed-certs-193865 kubelet[781]: E1227 20:56:49.954456     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz" containerName="dashboard-metrics-scraper"
	Dec 27 20:56:49 embed-certs-193865 kubelet[781]: I1227 20:56:49.954506     781 scope.go:122] "RemoveContainer" containerID="c358c768b3adb42665b80284fa9c986928f9b4359bd0429fba18e82301a41512"
	Dec 27 20:56:50 embed-certs-193865 kubelet[781]: I1227 20:56:50.234698     781 scope.go:122] "RemoveContainer" containerID="c358c768b3adb42665b80284fa9c986928f9b4359bd0429fba18e82301a41512"
	Dec 27 20:56:50 embed-certs-193865 kubelet[781]: E1227 20:56:50.235008     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz" containerName="dashboard-metrics-scraper"
	Dec 27 20:56:50 embed-certs-193865 kubelet[781]: I1227 20:56:50.235037     781 scope.go:122] "RemoveContainer" containerID="8aa96fff450baf4425feed3f8caffc607682e228422def93111eb887ed139977"
	Dec 27 20:56:50 embed-certs-193865 kubelet[781]: E1227 20:56:50.235199     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-p8jjz_kubernetes-dashboard(bbc1d301-7652-4cfe-b1a7-d1b8e21d8c76)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p8jjz" podUID="bbc1d301-7652-4cfe-b1a7-d1b8e21d8c76"
	Dec 27 20:56:54 embed-certs-193865 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 20:56:54 embed-certs-193865 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 20:56:54 embed-certs-193865 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [82f9c7926d9e00ef4eee7b452a712b6517c6239daad7d110dbea66322be1a9fe] <==
	2025/12/27 20:56:15 Starting overwatch
	2025/12/27 20:56:15 Using namespace: kubernetes-dashboard
	2025/12/27 20:56:15 Using in-cluster config to connect to apiserver
	2025/12/27 20:56:15 Using secret token for csrf signing
	2025/12/27 20:56:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 20:56:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 20:56:15 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 20:56:15 Generating JWE encryption key
	2025/12/27 20:56:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 20:56:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 20:56:15 Initializing JWE encryption key from synchronized object
	2025/12/27 20:56:15 Creating in-cluster Sidecar client
	2025/12/27 20:56:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:56:15 Serving insecurely on HTTP port: 9090
	2025/12/27 20:56:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [1727e2655810dd7761bc82b6eccd9673976f5a22e959a68cb5eb1c718414ce6d] <==
	I1227 20:56:04.428680       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 20:56:34.457813       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [be8b934ba2d3ac38f4d68377967b793ddd2b8910b8768fcc498117296522c796] <==
	I1227 20:56:35.255903       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 20:56:35.267747       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 20:56:35.267877       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 20:56:35.271262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:56:38.727070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:56:42.987439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:56:46.585490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:56:49.639048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:56:52.662570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:56:52.667326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:56:52.667470       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 20:56:52.667646       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-193865_a288591f-342f-47c1-b354-ee80038e80b3!
	I1227 20:56:52.673548       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"973a1ef1-d110-4815-b972-77baf58b2ed2", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-193865_a288591f-342f-47c1-b354-ee80038e80b3 became leader
	W1227 20:56:52.679420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:56:52.689677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:56:52.768031       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-193865_a288591f-342f-47c1-b354-ee80038e80b3!
	W1227 20:56:54.692726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:56:54.698185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:56:56.702295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:56:56.708150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:56:58.710713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:56:58.717794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
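For reference, the kube-scheduler warning in the logs above about configmap/extension-apiserver-authentication carries its own remediation hint. A minimal sketch of that command with the placeholders filled in for the system:kube-scheduler user rather than a service account; the binding name is illustrative and nothing below was executed as part of this job:

	kubectl --context embed-certs-193865 -n kube-system create rolebinding scheduler-authentication-reader \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler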
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-193865 -n embed-certs-193865
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-193865 -n embed-certs-193865: exit status 2 (397.320899ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-193865 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.74s)
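For reference, the kube-proxy log above reports an incomplete configuration and names the fix itself ("Consider using `--nodeport-addresses primary`"). A minimal sketch of what that looks like, assuming kube-proxy flags (or the matching KubeProxyConfiguration field) can be set on the cluster; the value is taken verbatim from the warning and the lines below are illustrative, not part of this test run:

	# restrict NodePort listeners to the node's primary IP(s)
	kube-proxy --nodeport-addresses=primary
	# equivalent KubeProxyConfiguration field (kubeadm-managed clusters keep this
	# in the kube-proxy ConfigMap in kube-system):
	#   nodePortAddresses: ["primary"]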

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.97s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-549946 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-549946 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (321.122757ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:58:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-549946 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
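The exit status 11 above comes from minikube's paused-state check, which shells out to `sudo runc list -f json` inside the node; runc fails because /run/runc is missing. A short sketch of reproducing that check by hand, assuming the profile still exists; the commands below are illustrative and were not run as part of this job:

	# re-run the exact check minikube performs, from inside the node
	minikube -p newest-cni-549946 ssh -- sudo runc list -f json
	# confirm whether runc's state directory exists and what CRI-O reports
	minikube -p newest-cni-549946 ssh -- ls -ld /run/runc
	minikube -p newest-cni-549946 ssh -- sudo crictl ps -a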
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-549946
helpers_test.go:244: (dbg) docker inspect newest-cni-549946:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "33026e33441a3f96ec992d0bc78455daa35943f22f16bf93834cd28639575522",
	        "Created": "2025-12-27T20:57:32.376707101Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 508423,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:57:32.46187551Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/33026e33441a3f96ec992d0bc78455daa35943f22f16bf93834cd28639575522/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/33026e33441a3f96ec992d0bc78455daa35943f22f16bf93834cd28639575522/hostname",
	        "HostsPath": "/var/lib/docker/containers/33026e33441a3f96ec992d0bc78455daa35943f22f16bf93834cd28639575522/hosts",
	        "LogPath": "/var/lib/docker/containers/33026e33441a3f96ec992d0bc78455daa35943f22f16bf93834cd28639575522/33026e33441a3f96ec992d0bc78455daa35943f22f16bf93834cd28639575522-json.log",
	        "Name": "/newest-cni-549946",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-549946:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-549946",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "33026e33441a3f96ec992d0bc78455daa35943f22f16bf93834cd28639575522",
	                "LowerDir": "/var/lib/docker/overlay2/982c2034b4244173858000f623c93c04ec27cc043c3dc430bac371ee9def442a-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/982c2034b4244173858000f623c93c04ec27cc043c3dc430bac371ee9def442a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/982c2034b4244173858000f623c93c04ec27cc043c3dc430bac371ee9def442a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/982c2034b4244173858000f623c93c04ec27cc043c3dc430bac371ee9def442a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-549946",
	                "Source": "/var/lib/docker/volumes/newest-cni-549946/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-549946",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-549946",
	                "name.minikube.sigs.k8s.io": "newest-cni-549946",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c3ad22b24a3d750c986f0b8f50b9cf53323b8dc40a1376daca6b579ce018c566",
	            "SandboxKey": "/var/run/docker/netns/c3ad22b24a3d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-549946": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:eb:d1:26:63:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b57edbc724b90e99751e5881513e82042c8927605a9433275af7712a02f70992",
	                    "EndpointID": "72654035c9ba6d2456b1e2a206261575982e83bff39f6811884d011615a6ad93",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-549946",
	                        "33026e33441a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
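When only a single value is needed, the `docker inspect` dump above can be narrowed with a Go template instead of being read in full; a small sketch (container name taken from the output above, the rest illustrative):

	# host port mapped to the node's SSH port (33443 in the dump above)
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' newest-cni-549946
	# container run state
	docker inspect -f '{{ .State.Status }}' newest-cni-549946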
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-549946 -n newest-cni-549946
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-549946 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-549946 logs -n 25: (1.37475225s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p old-k8s-version-855707                                                                                                                                                                                                                     │ old-k8s-version-855707       │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:52 UTC │
	│ start   │ -p default-k8s-diff-port-058924 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:52 UTC │ 27 Dec 25 20:53 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-058924 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-058924 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-058924 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
	│ start   │ -p default-k8s-diff-port-058924 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:54 UTC │
	│ image   │ default-k8s-diff-port-058924 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
	│ pause   │ -p default-k8s-diff-port-058924 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-058924                                                                                                                                                                                                               │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
	│ delete  │ -p default-k8s-diff-port-058924                                                                                                                                                                                                               │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
	│ start   │ -p embed-certs-193865 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:55 UTC │
	│ addons  │ enable metrics-server -p embed-certs-193865 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │                     │
	│ stop    │ -p embed-certs-193865 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │ 27 Dec 25 20:55 UTC │
	│ addons  │ enable dashboard -p embed-certs-193865 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │ 27 Dec 25 20:55 UTC │
	│ start   │ -p embed-certs-193865 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │ 27 Dec 25 20:56 UTC │
	│ image   │ embed-certs-193865 image list --format=json                                                                                                                                                                                                   │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:56 UTC │ 27 Dec 25 20:56 UTC │
	│ pause   │ -p embed-certs-193865 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:56 UTC │                     │
	│ delete  │ -p embed-certs-193865                                                                                                                                                                                                                         │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ delete  │ -p embed-certs-193865                                                                                                                                                                                                                         │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ delete  │ -p disable-driver-mounts-371621                                                                                                                                                                                                               │ disable-driver-mounts-371621 │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ start   │ -p no-preload-542467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-542467            │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │                     │
	│ ssh     │ force-systemd-flag-604544 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-604544    │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ delete  │ -p force-systemd-flag-604544                                                                                                                                                                                                                  │ force-systemd-flag-604544    │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ start   │ -p newest-cni-549946 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-549946            │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:58 UTC │
	│ addons  │ enable metrics-server -p newest-cni-549946 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-549946            │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:57:26
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:57:26.504560  507792 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:57:26.504757  507792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:57:26.504783  507792 out.go:374] Setting ErrFile to fd 2...
	I1227 20:57:26.504804  507792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:57:26.505092  507792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:57:26.505590  507792 out.go:368] Setting JSON to false
	I1227 20:57:26.506534  507792 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9599,"bootTime":1766859448,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:57:26.506637  507792 start.go:143] virtualization:  
	I1227 20:57:26.510823  507792 out.go:179] * [newest-cni-549946] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:57:26.515786  507792 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:57:26.515860  507792 notify.go:221] Checking for updates...
	I1227 20:57:26.523185  507792 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:57:26.526654  507792 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:57:26.530413  507792 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:57:26.533634  507792 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:57:26.537006  507792 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:57:26.540648  507792 config.go:182] Loaded profile config "no-preload-542467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:57:26.540787  507792 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:57:26.578122  507792 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:57:26.578224  507792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:57:26.671096  507792 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:45 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-27 20:57:26.66218308 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:57:26.671198  507792 docker.go:319] overlay module found
	I1227 20:57:26.674755  507792 out.go:179] * Using the docker driver based on user configuration
	I1227 20:57:26.677737  507792 start.go:309] selected driver: docker
	I1227 20:57:26.677751  507792 start.go:928] validating driver "docker" against <nil>
	I1227 20:57:26.677764  507792 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:57:26.678454  507792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:57:26.749195  507792 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:45 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-27 20:57:26.739529334 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:57:26.749349  507792 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W1227 20:57:26.749372  507792 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1227 20:57:26.749636  507792 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 20:57:26.753168  507792 out.go:179] * Using Docker driver with root privileges
	I1227 20:57:26.756278  507792 cni.go:84] Creating CNI manager for ""
	I1227 20:57:26.756342  507792 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:57:26.756351  507792 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 20:57:26.756427  507792 start.go:353] cluster config:
	{Name:newest-cni-549946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-549946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:57:26.759711  507792 out.go:179] * Starting "newest-cni-549946" primary control-plane node in "newest-cni-549946" cluster
	I1227 20:57:26.762560  507792 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:57:26.765720  507792 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:57:26.768914  507792 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:57:26.768958  507792 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:57:26.768967  507792 cache.go:65] Caching tarball of preloaded images
	I1227 20:57:26.769047  507792 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:57:26.769056  507792 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:57:26.769194  507792 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/config.json ...
	I1227 20:57:26.769213  507792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/config.json: {Name:mkcd26a4b9c4c6b94373795f549fdf8cff928a43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:57:26.769364  507792 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:57:26.792164  507792 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:57:26.792181  507792 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:57:26.792195  507792 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:57:26.792223  507792 start.go:360] acquireMachinesLock for newest-cni-549946: {Name:mk8b0ea7d2aaecab8531b3a335f669f52685ec48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:57:26.792314  507792 start.go:364] duration metric: took 77.035µs to acquireMachinesLock for "newest-cni-549946"
	I1227 20:57:26.792339  507792 start.go:93] Provisioning new machine with config: &{Name:newest-cni-549946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-549946 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:57:26.792407  507792 start.go:125] createHost starting for "" (driver="docker")
	I1227 20:57:24.385349  504634 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:57:24.393339  504634 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 20:57:24.407400  504634 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:57:24.421824  504634 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
	I1227 20:57:24.435226  504634 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:57:24.439959  504634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:57:24.453692  504634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:57:24.572484  504634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:57:24.589126  504634 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467 for IP: 192.168.76.2
	I1227 20:57:24.589148  504634 certs.go:195] generating shared ca certs ...
	I1227 20:57:24.589164  504634 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:57:24.589308  504634 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:57:24.589355  504634 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:57:24.589367  504634 certs.go:257] generating profile certs ...
	I1227 20:57:24.589430  504634 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/client.key
	I1227 20:57:24.589530  504634 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/client.crt with IP's: []
	I1227 20:57:25.266684  504634 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/client.crt ...
	I1227 20:57:25.266718  504634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/client.crt: {Name:mke1263e9e5fe1699607f45c529418217bba68e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:57:25.266941  504634 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/client.key ...
	I1227 20:57:25.266957  504634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/client.key: {Name:mkda52f81420427232a05eb87ee320499485f035 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:57:25.267057  504634 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/apiserver.key.7d5688d0
	I1227 20:57:25.267072  504634 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/apiserver.crt.7d5688d0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 20:57:25.867097  504634 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/apiserver.crt.7d5688d0 ...
	I1227 20:57:25.867126  504634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/apiserver.crt.7d5688d0: {Name:mk65302fe4a153bc6399db3fd538b4e3eb2506fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:57:25.867298  504634 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/apiserver.key.7d5688d0 ...
	I1227 20:57:25.867315  504634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/apiserver.key.7d5688d0: {Name:mkb81ba1ad25ac36dc64ec0937a5951c2dc600f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:57:25.867402  504634 certs.go:382] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/apiserver.crt.7d5688d0 -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/apiserver.crt
	I1227 20:57:25.867488  504634 certs.go:386] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/apiserver.key.7d5688d0 -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/apiserver.key
	I1227 20:57:25.867555  504634 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/proxy-client.key
	I1227 20:57:25.867576  504634 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/proxy-client.crt with IP's: []
	I1227 20:57:26.103527  504634 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/proxy-client.crt ...
	I1227 20:57:26.103665  504634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/proxy-client.crt: {Name:mk0e5dedfc4d171e4665fb51187e5179483f6e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:57:26.103897  504634 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/proxy-client.key ...
	I1227 20:57:26.103941  504634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/proxy-client.key: {Name:mk35bf3044681d65e827fe9793e9e1796eab9b23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
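
Note: certs.go/crypto.go generate these profile certificates in-process; the openssl commands below are only an illustrative equivalent of what the log records (a "minikube-user" client certificate signed by the shared minikube CA), not the code path minikube uses. The directory layout and certificate subject are assumptions for the sketch:

    # Hypothetical openssl equivalent of generating the profile client certificate.
    CA_DIR=$HOME/.minikube                          # assumed CA location
    OUT_DIR=$CA_DIR/profiles/no-preload-542467      # assumed profile directory
    openssl genrsa -out "$OUT_DIR/client.key" 2048
    openssl req -new -key "$OUT_DIR/client.key" \
      -subj "/O=system:masters/CN=minikube-user" \
      -out "$OUT_DIR/client.csr"
    openssl x509 -req -in "$OUT_DIR/client.csr" \
      -CA "$CA_DIR/ca.crt" -CAkey "$CA_DIR/ca.key" -CAcreateserial \
      -days 1095 -out "$OUT_DIR/client.crt"
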
	I1227 20:57:26.104186  504634 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:57:26.104266  504634 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:57:26.104307  504634 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:57:26.104374  504634 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:57:26.104432  504634 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:57:26.104483  504634 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:57:26.104570  504634 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:57:26.105211  504634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:57:26.129403  504634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:57:26.148730  504634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:57:26.167825  504634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:57:26.187692  504634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 20:57:26.218367  504634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:57:26.254346  504634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:57:26.284385  504634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:57:26.347914  504634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:57:26.369864  504634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:57:26.393865  504634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:57:26.418332  504634 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:57:26.432800  504634 ssh_runner.go:195] Run: openssl version
	I1227 20:57:26.441765  504634 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:57:26.462166  504634 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:57:26.470968  504634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:57:26.475219  504634 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:57:26.475291  504634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:57:26.517522  504634 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:57:26.526085  504634 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 20:57:26.534149  504634 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:57:26.542070  504634 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:57:26.550398  504634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:57:26.554682  504634 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:57:26.554750  504634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:57:26.601666  504634 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:57:26.609709  504634 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/274336.pem /etc/ssl/certs/51391683.0
	I1227 20:57:26.617499  504634 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:57:26.625603  504634 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:57:26.633197  504634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:57:26.637379  504634 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:57:26.637440  504634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:57:26.681591  504634 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:57:26.691576  504634 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2743362.pem /etc/ssl/certs/3ec20f2e.0
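
Note: the ls / openssl x509 -hash / ln -fs sequence above is how extra CAs are registered on Debian-style images: OpenSSL looks certificates up in /etc/ssl/certs via a symlink named <subject-hash>.0, so each PEM copied into /usr/share/ca-certificates gets a hash-named link. A sketch for a single certificate (the path is a placeholder):

    PEM=/usr/share/ca-certificates/minikubeCA.pem   # placeholder; the other PEMs follow the same steps
    HASH=$(openssl x509 -hash -noout -in "$PEM")    # prints e.g. b5213941, as in the log
    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"
    ls -l "/etc/ssl/certs/${HASH}.0"                # the link should point back at $PEM
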
	I1227 20:57:26.707455  504634 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:57:26.714125  504634 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 20:57:26.714173  504634 kubeadm.go:401] StartCluster: {Name:no-preload-542467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-542467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:57:26.714242  504634 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:57:26.714308  504634 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:57:26.750103  504634 cri.go:96] found id: ""
	I1227 20:57:26.750159  504634 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:57:26.757870  504634 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 20:57:26.765432  504634 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:57:26.765516  504634 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:57:26.776076  504634 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:57:26.776098  504634 kubeadm.go:158] found existing configuration files:
	
	I1227 20:57:26.776146  504634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 20:57:26.787216  504634 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:57:26.787275  504634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:57:26.795313  504634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 20:57:26.804458  504634 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:57:26.804532  504634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:57:26.814772  504634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 20:57:26.824080  504634 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:57:26.824149  504634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:57:26.832889  504634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 20:57:26.842291  504634 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:57:26.842361  504634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 20:57:26.850424  504634 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:57:26.906866  504634 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 20:57:26.907307  504634 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 20:57:27.073586  504634 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 20:57:27.073666  504634 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 20:57:27.073710  504634 kubeadm.go:319] OS: Linux
	I1227 20:57:27.073774  504634 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 20:57:27.073830  504634 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 20:57:27.073885  504634 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 20:57:27.073934  504634 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 20:57:27.073982  504634 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 20:57:27.074033  504634 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 20:57:27.074079  504634 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 20:57:27.074127  504634 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 20:57:27.074172  504634 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 20:57:27.162169  504634 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 20:57:27.162287  504634 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 20:57:27.162390  504634 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
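
Note: as the preflight message says, the image pulls that dominate this phase can be done ahead of time with the same kubeadm binary and config file the init command above is given. A sketch, assuming the paths from this run:

    # List, then pre-pull, the control-plane images kubeadm init would otherwise fetch during preflight.
    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml
    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml
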
	I1227 20:57:27.185681  504634 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 20:57:27.192557  504634 out.go:252]   - Generating certificates and keys ...
	I1227 20:57:27.192658  504634 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 20:57:27.192735  504634 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 20:57:27.429335  504634 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 20:57:28.281925  504634 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 20:57:28.646772  504634 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 20:57:28.922943  504634 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 20:57:29.274130  504634 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 20:57:29.277843  504634 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-542467] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 20:57:29.457916  504634 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 20:57:29.462074  504634 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-542467] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 20:57:29.607489  504634 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 20:57:29.952291  504634 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 20:57:30.042678  504634 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 20:57:30.043321  504634 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 20:57:30.424703  504634 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 20:57:30.489729  504634 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 20:57:30.582447  504634 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 20:57:30.890291  504634 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 20:57:30.981603  504634 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 20:57:30.982679  504634 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 20:57:30.985425  504634 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 20:57:26.796754  507792 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 20:57:26.796978  507792 start.go:159] libmachine.API.Create for "newest-cni-549946" (driver="docker")
	I1227 20:57:26.797006  507792 client.go:173] LocalClient.Create starting
	I1227 20:57:26.797057  507792 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem
	I1227 20:57:26.797090  507792 main.go:144] libmachine: Decoding PEM data...
	I1227 20:57:26.797105  507792 main.go:144] libmachine: Parsing certificate...
	I1227 20:57:26.797154  507792 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem
	I1227 20:57:26.797173  507792 main.go:144] libmachine: Decoding PEM data...
	I1227 20:57:26.797185  507792 main.go:144] libmachine: Parsing certificate...
	I1227 20:57:26.797599  507792 cli_runner.go:164] Run: docker network inspect newest-cni-549946 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 20:57:26.823301  507792 cli_runner.go:211] docker network inspect newest-cni-549946 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 20:57:26.823375  507792 network_create.go:284] running [docker network inspect newest-cni-549946] to gather additional debugging logs...
	I1227 20:57:26.823396  507792 cli_runner.go:164] Run: docker network inspect newest-cni-549946
	W1227 20:57:26.844229  507792 cli_runner.go:211] docker network inspect newest-cni-549946 returned with exit code 1
	I1227 20:57:26.844252  507792 network_create.go:287] error running [docker network inspect newest-cni-549946]: docker network inspect newest-cni-549946: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-549946 not found
	I1227 20:57:26.844264  507792 network_create.go:289] output of [docker network inspect newest-cni-549946]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-549946 not found
	
	** /stderr **
	I1227 20:57:26.844354  507792 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:57:26.867252  507792 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9521cb9225c5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:1d:ef:38:b7:a6} reservation:<nil>}
	I1227 20:57:26.867769  507792 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-68d11cc2ab47 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:8d:ad:37:cb:fe} reservation:<nil>}
	I1227 20:57:26.868043  507792 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d3b7cfff4895 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:4a:e3:08:10:2f} reservation:<nil>}
	I1227 20:57:26.868377  507792 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c1ebbbafc127 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7a:a5:c1:28:f3:5c} reservation:<nil>}
	I1227 20:57:26.868980  507792 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a755f0}
	I1227 20:57:26.869012  507792 network_create.go:124] attempt to create docker network newest-cni-549946 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1227 20:57:26.869128  507792 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-549946 newest-cni-549946
	I1227 20:57:26.963343  507792 network_create.go:108] docker network newest-cni-549946 192.168.85.0/24 created
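
Note: network.go walks the private 192.168.x.0/24 ranges, skips any subnet already owned by an existing Docker bridge (the four "skipping subnet" lines above), and creates the profile network on the first free one. A rough manual equivalent; the subnet and labels are the ones from this run, and minikube's extra -o flags are omitted:

    # Subnets already claimed by existing bridge networks (what the log skips):
    docker network ls --format '{{.Name}}' \
      | xargs -r -n1 docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
    # Create the profile network on the first free /24:
    docker network create --driver=bridge \
      --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
      -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=newest-cni-549946 \
      newest-cni-549946
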
	I1227 20:57:26.963381  507792 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-549946" container
	I1227 20:57:26.963485  507792 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 20:57:26.992234  507792 cli_runner.go:164] Run: docker volume create newest-cni-549946 --label name.minikube.sigs.k8s.io=newest-cni-549946 --label created_by.minikube.sigs.k8s.io=true
	I1227 20:57:27.023142  507792 oci.go:103] Successfully created a docker volume newest-cni-549946
	I1227 20:57:27.023247  507792 cli_runner.go:164] Run: docker run --rm --name newest-cni-549946-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-549946 --entrypoint /usr/bin/test -v newest-cni-549946:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 20:57:27.651208  507792 oci.go:107] Successfully prepared a docker volume newest-cni-549946
	I1227 20:57:27.651294  507792 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:57:27.651311  507792 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 20:57:27.651380  507792 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-549946:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 20:57:31.006152  504634 out.go:252]   - Booting up control plane ...
	I1227 20:57:31.006277  504634 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 20:57:31.006365  504634 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 20:57:31.006440  504634 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 20:57:31.011528  504634 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 20:57:31.011888  504634 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 20:57:31.024656  504634 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 20:57:31.025008  504634 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 20:57:31.025224  504634 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 20:57:31.175539  504634 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 20:57:31.175669  504634 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 20:57:31.702266  504634 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 524.683838ms
	I1227 20:57:31.702770  504634 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 20:57:31.703080  504634 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1227 20:57:31.703375  504634 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 20:57:31.704086  504634 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 20:57:32.262509  507792 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-549946:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.611083534s)
	I1227 20:57:32.262538  507792 kic.go:203] duration metric: took 4.611224749s to extract preloaded images to volume ...
	W1227 20:57:32.262682  507792 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 20:57:32.262816  507792 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 20:57:32.356495  507792 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-549946 --name newest-cni-549946 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-549946 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-549946 --network newest-cni-549946 --ip 192.168.85.2 --volume newest-cni-549946:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 20:57:32.784090  507792 cli_runner.go:164] Run: docker container inspect newest-cni-549946 --format={{.State.Running}}
	I1227 20:57:32.803962  507792 cli_runner.go:164] Run: docker container inspect newest-cni-549946 --format={{.State.Status}}
	I1227 20:57:32.827155  507792 cli_runner.go:164] Run: docker exec newest-cni-549946 stat /var/lib/dpkg/alternatives/iptables
	I1227 20:57:32.885693  507792 oci.go:144] the created container "newest-cni-549946" has a running status.
	I1227 20:57:32.885731  507792 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa...
	I1227 20:57:33.106095  507792 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 20:57:33.150729  507792 cli_runner.go:164] Run: docker container inspect newest-cni-549946 --format={{.State.Status}}
	I1227 20:57:33.175926  507792 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 20:57:33.175945  507792 kic_runner.go:114] Args: [docker exec --privileged newest-cni-549946 chown docker:docker /home/docker/.ssh/authorized_keys]
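
Note: kic.go creates a fresh keypair for the node and installs the public half as the docker user's authorized_keys inside the container, then fixes ownership (the chown above). A hand-run equivalent, with the container name and the published SSH port (33443) taken from this log; the local key filename is an arbitrary choice:

    ssh-keygen -t rsa -N '' -f ./newest-cni-549946_id_rsa
    docker exec newest-cni-549946 mkdir -p /home/docker/.ssh
    docker cp ./newest-cni-549946_id_rsa.pub newest-cni-549946:/home/docker/.ssh/authorized_keys
    docker exec --privileged newest-cni-549946 sh -c 'chown -R docker:docker /home/docker/.ssh && chmod 700 /home/docker/.ssh'
    # The host then reaches the node over the port Docker published for 22/tcp:
    ssh -i ./newest-cni-549946_id_rsa -p 33443 docker@127.0.0.1 hostname
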
	I1227 20:57:33.277670  507792 cli_runner.go:164] Run: docker container inspect newest-cni-549946 --format={{.State.Status}}
	I1227 20:57:33.305274  507792 machine.go:94] provisionDockerMachine start ...
	I1227 20:57:33.305370  507792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:57:33.328000  507792 main.go:144] libmachine: Using SSH client type: native
	I1227 20:57:33.328326  507792 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1227 20:57:33.328335  507792 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:57:33.330282  507792 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 20:57:36.478423  507792 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-549946
	
	I1227 20:57:36.478450  507792 ubuntu.go:182] provisioning hostname "newest-cni-549946"
	I1227 20:57:36.478549  507792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:57:36.497610  507792 main.go:144] libmachine: Using SSH client type: native
	I1227 20:57:36.497937  507792 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1227 20:57:36.497955  507792 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-549946 && echo "newest-cni-549946" | sudo tee /etc/hostname
	I1227 20:57:34.213609  504634 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.509181765s
	I1227 20:57:35.986369  504634 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.281857145s
	I1227 20:57:38.207136  504634 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.50340964s
	I1227 20:57:38.246895  504634 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 20:57:38.264327  504634 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 20:57:38.287825  504634 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 20:57:38.288027  504634 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-542467 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 20:57:38.311393  504634 kubeadm.go:319] [bootstrap-token] Using token: jc0epl.8v8y4i4xv9rir54x
	I1227 20:57:38.314585  504634 out.go:252]   - Configuring RBAC rules ...
	I1227 20:57:38.314727  504634 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 20:57:38.321065  504634 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 20:57:38.329866  504634 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 20:57:38.334094  504634 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 20:57:38.338312  504634 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 20:57:38.344801  504634 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 20:57:38.618285  504634 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 20:57:39.056401  504634 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 20:57:39.617667  504634 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 20:57:39.618774  504634 kubeadm.go:319] 
	I1227 20:57:39.618854  504634 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 20:57:39.618868  504634 kubeadm.go:319] 
	I1227 20:57:39.618945  504634 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 20:57:39.618949  504634 kubeadm.go:319] 
	I1227 20:57:39.618974  504634 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 20:57:39.619033  504634 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 20:57:39.619083  504634 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 20:57:39.619086  504634 kubeadm.go:319] 
	I1227 20:57:39.619139  504634 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 20:57:39.619143  504634 kubeadm.go:319] 
	I1227 20:57:39.619191  504634 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 20:57:39.619194  504634 kubeadm.go:319] 
	I1227 20:57:39.619246  504634 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 20:57:39.619320  504634 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 20:57:39.619395  504634 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 20:57:39.619405  504634 kubeadm.go:319] 
	I1227 20:57:39.619488  504634 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 20:57:39.619564  504634 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 20:57:39.619568  504634 kubeadm.go:319] 
	I1227 20:57:39.619651  504634 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token jc0epl.8v8y4i4xv9rir54x \
	I1227 20:57:39.619754  504634 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ff29328d1e0d612c7979c16c69d6042f5f31e931d111cc12c8320ed4e4ab5152 \
	I1227 20:57:39.619773  504634 kubeadm.go:319] 	--control-plane 
	I1227 20:57:39.619778  504634 kubeadm.go:319] 
	I1227 20:57:39.619862  504634 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 20:57:39.619866  504634 kubeadm.go:319] 
	I1227 20:57:39.620227  504634 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token jc0epl.8v8y4i4xv9rir54x \
	I1227 20:57:39.620337  504634 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ff29328d1e0d612c7979c16c69d6042f5f31e931d111cc12c8320ed4e4ab5152 
	I1227 20:57:39.624593  504634 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 20:57:39.625006  504634 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 20:57:39.625113  504634 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
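
Note: the bootstrap token in the join commands above is short-lived (24h by default), so a node joining later needs a fresh one. A sketch for regenerating the join line and recomputing the discovery hash, assuming the kubeadm binary and certificate directory used elsewhere in this run:

    # Print a new 'kubeadm join ...' line with a fresh token.
    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm token create --print-join-command
    # Recompute the --discovery-token-ca-cert-hash value from the cluster CA.
    sudo openssl x509 -pubkey -noout -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'
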
	I1227 20:57:39.625128  504634 cni.go:84] Creating CNI manager for ""
	I1227 20:57:39.625136  504634 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:57:39.628426  504634 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1227 20:57:36.643816  507792 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-549946
	
	I1227 20:57:36.643898  507792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:57:36.661276  507792 main.go:144] libmachine: Using SSH client type: native
	I1227 20:57:36.661636  507792 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1227 20:57:36.661659  507792 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-549946' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-549946/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-549946' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:57:36.814171  507792 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:57:36.814203  507792 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:57:36.814222  507792 ubuntu.go:190] setting up certificates
	I1227 20:57:36.814239  507792 provision.go:84] configureAuth start
	I1227 20:57:36.814298  507792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-549946
	I1227 20:57:36.836930  507792 provision.go:143] copyHostCerts
	I1227 20:57:36.836994  507792 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:57:36.837004  507792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:57:36.837074  507792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:57:36.837176  507792 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:57:36.837182  507792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:57:36.837207  507792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:57:36.837263  507792 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:57:36.837268  507792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:57:36.837292  507792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:57:36.837345  507792 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.newest-cni-549946 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-549946]
	I1227 20:57:37.033895  507792 provision.go:177] copyRemoteCerts
	I1227 20:57:37.033991  507792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:57:37.034044  507792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:57:37.057571  507792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:57:37.161244  507792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:57:37.180204  507792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 20:57:37.203074  507792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:57:37.224790  507792 provision.go:87] duration metric: took 410.529975ms to configureAuth
	I1227 20:57:37.224826  507792 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:57:37.225059  507792 config.go:182] Loaded profile config "newest-cni-549946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:57:37.225175  507792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:57:37.244190  507792 main.go:144] libmachine: Using SSH client type: native
	I1227 20:57:37.244491  507792 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1227 20:57:37.244505  507792 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:57:37.596657  507792 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:57:37.596718  507792 machine.go:97] duration metric: took 4.291418868s to provisionDockerMachine
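
Note: ubuntu.go drops the extra runtime flag into /etc/sysconfig/crio.minikube and restarts crio (a few lines up); how that file is wired into the crio unit is a property of the kicbase image rather than anything shown in this log, so the quickest check on the node is to inspect both the file and the unit:

    cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -iE 'EnvironmentFile|ExecStart'
    systemctl is-active crio
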
	I1227 20:57:37.596753  507792 client.go:176] duration metric: took 10.799739141s to LocalClient.Create
	I1227 20:57:37.596800  507792 start.go:167] duration metric: took 10.799822767s to libmachine.API.Create "newest-cni-549946"
	I1227 20:57:37.596847  507792 start.go:293] postStartSetup for "newest-cni-549946" (driver="docker")
	I1227 20:57:37.596883  507792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:57:37.596977  507792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:57:37.597067  507792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:57:37.617528  507792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:57:37.719752  507792 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:57:37.723660  507792 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:57:37.723691  507792 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:57:37.723704  507792 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:57:37.723769  507792 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:57:37.723853  507792 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:57:37.723962  507792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:57:37.732147  507792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:57:37.761959  507792 start.go:296] duration metric: took 165.072915ms for postStartSetup
	I1227 20:57:37.762434  507792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-549946
	I1227 20:57:37.779398  507792 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/config.json ...
	I1227 20:57:37.779667  507792 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:57:37.779712  507792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:57:37.797046  507792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:57:37.896521  507792 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:57:37.902197  507792 start.go:128] duration metric: took 11.109770034s to createHost
	I1227 20:57:37.902226  507792 start.go:83] releasing machines lock for "newest-cni-549946", held for 11.109897276s
	I1227 20:57:37.902314  507792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-549946
	I1227 20:57:37.923508  507792 ssh_runner.go:195] Run: cat /version.json
	I1227 20:57:37.923568  507792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:57:37.923697  507792 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:57:37.923784  507792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:57:37.955478  507792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:57:37.956456  507792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:57:38.156135  507792 ssh_runner.go:195] Run: systemctl --version
	I1227 20:57:38.162813  507792 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:57:38.200913  507792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:57:38.207390  507792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:57:38.207456  507792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:57:38.243738  507792 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 20:57:38.243759  507792 start.go:496] detecting cgroup driver to use...
	I1227 20:57:38.243790  507792 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:57:38.243842  507792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:57:38.267376  507792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:57:38.282835  507792 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:57:38.282938  507792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:57:38.305746  507792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:57:38.327496  507792 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:57:38.454357  507792 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:57:38.596136  507792 docker.go:234] disabling docker service ...
	I1227 20:57:38.596228  507792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:57:38.621009  507792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:57:38.641159  507792 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:57:38.814750  507792 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:57:38.969062  507792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:57:38.991215  507792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:57:39.009941  507792 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:57:39.010086  507792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:57:39.023940  507792 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:57:39.024055  507792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:57:39.036994  507792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:57:39.048514  507792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:57:39.059403  507792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:57:39.069559  507792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:57:39.083989  507792 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:57:39.113711  507792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
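
Note: the sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place so the pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl match what the kubelet is configured for. A quick way to spot-check the result (expected values in the comments come from this log; the drop-in contains other settings too):

    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected, per the log:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside the default_sysctls list)
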
	I1227 20:57:39.128237  507792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:57:39.137603  507792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:57:39.147173  507792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:57:39.330207  507792 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:57:39.540537  507792 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:57:39.540653  507792 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:57:39.547058  507792 start.go:574] Will wait 60s for crictl version
	I1227 20:57:39.547189  507792 ssh_runner.go:195] Run: which crictl
	I1227 20:57:39.550829  507792 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:57:39.576034  507792 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:57:39.576170  507792 ssh_runner.go:195] Run: crio --version
	I1227 20:57:39.613587  507792 ssh_runner.go:195] Run: crio --version
	I1227 20:57:39.664612  507792 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:57:39.667551  507792 cli_runner.go:164] Run: docker network inspect newest-cni-549946 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:57:39.688854  507792 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 20:57:39.692989  507792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
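The one-liner above is minikube's idempotent /etc/hosts update: strip any existing host.minikube.internal line, append the current mapping, and copy the temp file back, so repeated starts never stack duplicate entries. The same idiom, unpacked for readability (IP and hostname taken from the log line; the temp path is arbitrary):

    # drop the old entry (if any), append the fresh one, then swap the file into place
    { grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.85.1\thost.minikube.internal\n'; } > /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts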
	I1227 20:57:39.710426  507792 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1227 20:57:39.713295  507792 kubeadm.go:884] updating cluster {Name:newest-cni-549946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-549946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:57:39.713558  507792 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:57:39.713639  507792 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:57:39.763367  507792 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:57:39.763393  507792 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:57:39.763449  507792 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:57:39.808723  507792 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:57:39.808748  507792 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:57:39.808756  507792 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1227 20:57:39.808908  507792 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-549946 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-549946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
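The empty ExecStart= line followed by a second ExecStart= in the unit text above is the standard systemd drop-in pattern for replacing, rather than appending to, the packaged kubelet command line. After minikube copies this as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below and runs daemon-reload, the merged result can be checked on the node with:

    # show the base kubelet unit plus every drop-in that overrides it
    systemctl cat kubelet
    # confirm which ExecStart actually won after the override
    systemctl show kubelet -p ExecStart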
	I1227 20:57:39.809044  507792 ssh_runner.go:195] Run: crio config
	I1227 20:57:39.881390  507792 cni.go:84] Creating CNI manager for ""
	I1227 20:57:39.881481  507792 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:57:39.881516  507792 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1227 20:57:39.881572  507792 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-549946 NodeName:newest-cni-549946 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:57:39.881743  507792 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-549946"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:57:39.881834  507792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:57:39.890716  507792 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:57:39.890829  507792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:57:39.899538  507792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 20:57:39.918582  507792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:57:39.936600  507792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
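The InitConfiguration / ClusterConfiguration / KubeletConfiguration / KubeProxyConfiguration block rendered above is what just landed on the node as /var/tmp/minikube/kubeadm.yaml.new. When debugging a failed init, that same file can be exercised without touching node state, for example (preflight failures may still need the same --ignore-preflight-errors list the real run uses below):

    # walk through the whole init against the generated config without persisting anything
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run

Recent kubeadm releases also ship `kubeadm config validate --config <file>` if only schema checking of the YAML is wanted.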
	I1227 20:57:39.952345  507792 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:57:39.956518  507792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:57:39.967476  507792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:57:40.143771  507792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:57:40.179432  507792 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946 for IP: 192.168.85.2
	I1227 20:57:40.179502  507792 certs.go:195] generating shared ca certs ...
	I1227 20:57:40.179541  507792 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:57:40.179720  507792 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:57:40.179805  507792 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:57:40.179831  507792 certs.go:257] generating profile certs ...
	I1227 20:57:40.179903  507792 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/client.key
	I1227 20:57:40.179949  507792 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/client.crt with IP's: []
	I1227 20:57:40.431102  507792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/client.crt ...
	I1227 20:57:40.431135  507792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/client.crt: {Name:mk7239b3d1f2caa5732ae5c72b07e302b9c6409e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:57:40.431375  507792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/client.key ...
	I1227 20:57:40.431391  507792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/client.key: {Name:mkc5482ced97046a22f4e8b36ca4af10ca3e056a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:57:40.431544  507792 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/apiserver.key.a445ad92
	I1227 20:57:40.431566  507792 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/apiserver.crt.a445ad92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1227 20:57:40.614378  507792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/apiserver.crt.a445ad92 ...
	I1227 20:57:40.614411  507792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/apiserver.crt.a445ad92: {Name:mkef185143239d89511916748edc967ead96571a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:57:40.614698  507792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/apiserver.key.a445ad92 ...
	I1227 20:57:40.614717  507792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/apiserver.key.a445ad92: {Name:mkc7a643e99bfb2d3daf348c56526fb48a8cbdb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:57:40.614804  507792 certs.go:382] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/apiserver.crt.a445ad92 -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/apiserver.crt
	I1227 20:57:40.614893  507792 certs.go:386] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/apiserver.key.a445ad92 -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/apiserver.key
	I1227 20:57:40.614958  507792 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/proxy-client.key
	I1227 20:57:40.614980  507792 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/proxy-client.crt with IP's: []
	I1227 20:57:40.781966  507792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/proxy-client.crt ...
	I1227 20:57:40.782000  507792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/proxy-client.crt: {Name:mk126bad308c6296b19bca02f1ae4b5dee55a97b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:57:40.782205  507792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/proxy-client.key ...
	I1227 20:57:40.782224  507792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/proxy-client.key: {Name:mk6becfb4b433bbf39dd157247cb79dad3f27697 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:57:40.782442  507792 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:57:40.782506  507792 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:57:40.782524  507792 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:57:40.782583  507792 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:57:40.782630  507792 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:57:40.782665  507792 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:57:40.782731  507792 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:57:40.783303  507792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:57:40.801620  507792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:57:40.821077  507792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:57:40.838570  507792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:57:40.860173  507792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 20:57:40.879953  507792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 20:57:40.898656  507792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:57:40.917025  507792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 20:57:40.935865  507792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:57:40.955692  507792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:57:40.978294  507792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:57:41.001082  507792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:57:41.016661  507792 ssh_runner.go:195] Run: openssl version
	I1227 20:57:41.024148  507792 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:57:41.032368  507792 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:57:41.039463  507792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:57:41.044261  507792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:57:41.044353  507792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:57:41.089289  507792 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:57:41.097280  507792 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2743362.pem /etc/ssl/certs/3ec20f2e.0
	I1227 20:57:41.104928  507792 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:57:41.112509  507792 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:57:41.119994  507792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:57:41.123864  507792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:57:41.123968  507792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:57:41.167868  507792 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:57:41.175016  507792 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 20:57:41.183614  507792 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:57:41.190696  507792 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:57:41.199184  507792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:57:41.202621  507792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:57:41.202719  507792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:57:41.244318  507792 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:57:41.253894  507792 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/274336.pem /etc/ssl/certs/51391683.0
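The openssl x509 -hash / ln -fs pairs above are a hand-rolled c_rehash: OpenSSL locates trust anchors in /etc/ssl/certs by subject-hash filename, so each CA PEM needs a <hash>.0 symlink next to it. Spelled out for a single certificate (using the minikubeCA file from the log; the variable name is only for illustration):

    # compute the subject hash OpenSSL will look up ...
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # ... and point /etc/ssl/certs/<hash>.0 at the certificate (b5213941.0 in the run above)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"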
	I1227 20:57:41.261258  507792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:57:41.264699  507792 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 20:57:41.264749  507792 kubeadm.go:401] StartCluster: {Name:newest-cni-549946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-549946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:57:41.264830  507792 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:57:41.264888  507792 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:57:41.293609  507792 cri.go:96] found id: ""
	I1227 20:57:41.293679  507792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:57:41.302113  507792 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 20:57:41.309750  507792 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:57:41.309842  507792 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:57:41.317109  507792 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:57:41.317126  507792 kubeadm.go:158] found existing configuration files:
	
	I1227 20:57:41.317174  507792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 20:57:41.324486  507792 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:57:41.324562  507792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:57:41.331562  507792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 20:57:41.338892  507792 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:57:41.338970  507792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:57:41.346440  507792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 20:57:41.353610  507792 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:57:41.353697  507792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:57:41.361038  507792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 20:57:41.368043  507792 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:57:41.368143  507792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 20:57:41.375049  507792 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:57:41.428734  507792 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 20:57:41.429265  507792 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 20:57:41.511748  507792 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 20:57:41.511903  507792 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 20:57:41.511975  507792 kubeadm.go:319] OS: Linux
	I1227 20:57:41.512055  507792 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 20:57:41.512167  507792 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 20:57:41.512230  507792 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 20:57:41.512286  507792 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 20:57:41.512336  507792 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 20:57:41.512391  507792 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 20:57:41.512440  507792 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 20:57:41.512497  507792 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 20:57:41.512547  507792 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 20:57:41.584386  507792 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 20:57:41.584540  507792 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 20:57:41.584678  507792 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 20:57:41.607777  507792 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 20:57:39.631262  504634 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 20:57:39.636861  504634 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 20:57:39.636884  504634 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 20:57:39.662928  504634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 20:57:40.119405  504634 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 20:57:40.119537  504634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:57:40.119653  504634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-542467 minikube.k8s.io/updated_at=2025_12_27T20_57_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562 minikube.k8s.io/name=no-preload-542467 minikube.k8s.io/primary=true
	I1227 20:57:40.539010  504634 ops.go:34] apiserver oom_adj: -16
	I1227 20:57:40.539118  504634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:57:41.039867  504634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:57:41.539252  504634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:57:42.039304  504634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:57:42.539729  504634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:57:43.039306  504634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:57:43.539564  504634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:57:43.742714  504634 kubeadm.go:1114] duration metric: took 3.623223017s to wait for elevateKubeSystemPrivileges
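The burst of `kubectl get sa default` calls above (one every 500 ms) is minikube waiting for the controller-manager to create the "default" ServiceAccount, the readiness signal behind the elevateKubeSystemPrivileges step that also issued the minikube-rbac cluster-admin binding for kube-system:default. The same wait, written directly against the cluster (binary and kubeconfig paths taken from the log):

    # block until the default ServiceAccount exists, mirroring the 500 ms polling loop above
    until sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done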
	I1227 20:57:43.742747  504634 kubeadm.go:403] duration metric: took 17.028576163s to StartCluster
	I1227 20:57:43.742769  504634 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:57:43.742829  504634 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:57:43.743610  504634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:57:43.743844  504634 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:57:43.743963  504634 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 20:57:43.744184  504634 config.go:182] Loaded profile config "no-preload-542467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:57:43.744221  504634 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:57:43.744286  504634 addons.go:70] Setting storage-provisioner=true in profile "no-preload-542467"
	I1227 20:57:43.744300  504634 addons.go:239] Setting addon storage-provisioner=true in "no-preload-542467"
	I1227 20:57:43.744324  504634 host.go:66] Checking if "no-preload-542467" exists ...
	I1227 20:57:43.745001  504634 addons.go:70] Setting default-storageclass=true in profile "no-preload-542467"
	I1227 20:57:43.745033  504634 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-542467"
	I1227 20:57:43.745058  504634 cli_runner.go:164] Run: docker container inspect no-preload-542467 --format={{.State.Status}}
	I1227 20:57:43.745330  504634 cli_runner.go:164] Run: docker container inspect no-preload-542467 --format={{.State.Status}}
	I1227 20:57:43.747411  504634 out.go:179] * Verifying Kubernetes components...
	I1227 20:57:43.750310  504634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:57:43.782876  504634 addons.go:239] Setting addon default-storageclass=true in "no-preload-542467"
	I1227 20:57:43.782912  504634 host.go:66] Checking if "no-preload-542467" exists ...
	I1227 20:57:43.783347  504634 cli_runner.go:164] Run: docker container inspect no-preload-542467 --format={{.State.Status}}
	I1227 20:57:43.798962  504634 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:57:43.801799  504634 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:57:43.801820  504634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:57:43.801880  504634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-542467
	I1227 20:57:43.829579  504634 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:57:43.829605  504634 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:57:43.829676  504634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-542467
	I1227 20:57:43.837555  504634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/no-preload-542467/id_rsa Username:docker}
	I1227 20:57:43.871624  504634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/no-preload-542467/id_rsa Username:docker}
	I1227 20:57:41.610872  507792 out.go:252]   - Generating certificates and keys ...
	I1227 20:57:41.611009  507792 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 20:57:41.611111  507792 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 20:57:41.740911  507792 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 20:57:41.871610  507792 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 20:57:41.980382  507792 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 20:57:42.020048  507792 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 20:57:42.432501  507792 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 20:57:42.432702  507792 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-549946] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 20:57:42.502021  507792 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 20:57:42.502457  507792 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-549946] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 20:57:42.937673  507792 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 20:57:43.564865  507792 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 20:57:44.119474  507792 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 20:57:44.119544  507792 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 20:57:44.469174  507792 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 20:57:44.638920  507792 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 20:57:45.477202  507792 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 20:57:45.642826  507792 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 20:57:45.929067  507792 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 20:57:45.929164  507792 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 20:57:45.931024  507792 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
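The certs phase above signs the apiserver and etcd serving certificates for the node's names and addresses (the certSANs and the 192.168.85.2 node IP from the config earlier). If a TLS name mismatch is suspected later, the SANs actually baked into the on-disk certificate can be listed directly:

    # show the DNS names and IPs the apiserver certificate was issued for
    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'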
	I1227 20:57:44.330240  504634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:57:44.431758  504634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:57:44.497533  504634 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 20:57:44.497641  504634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:57:46.133429  504634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.701624312s)
	I1227 20:57:46.133817  504634 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.636146069s)
	I1227 20:57:46.133897  504634 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.636337162s)
	I1227 20:57:46.133916  504634 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
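The sed pipeline completed at 20:57:46.133897 above edits the coredns ConfigMap in place: it enables query logging and splices a hosts plugin block (the gateway IP mapped to host.minikube.internal, with fallthrough) in front of the forward-to-resolv.conf plugin, then pushes the result back with `kubectl replace -f -`. Only the injected pieces are certain from the log; the surrounding plugins shown here are the stock CoreDNS defaults and are an assumption, but the resulting Corefile should read roughly like:

    $ kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    .:53 {
        log
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa { ... }
        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }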
	I1227 20:57:46.135466  504634 node_ready.go:35] waiting up to 6m0s for node "no-preload-542467" to be "Ready" ...
	I1227 20:57:46.139365  504634 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1227 20:57:45.934383  507792 out.go:252]   - Booting up control plane ...
	I1227 20:57:45.934489  507792 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 20:57:45.934579  507792 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 20:57:45.935061  507792 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 20:57:45.958566  507792 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 20:57:45.958688  507792 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 20:57:45.967601  507792 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 20:57:45.973335  507792 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 20:57:45.973800  507792 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 20:57:46.181961  507792 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 20:57:46.182082  507792 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 20:57:46.142219  504634 addons.go:530] duration metric: took 2.397983822s for enable addons: enabled=[default-storageclass storage-provisioner]
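Enabling these two addons amounts to nothing more than the pair of kubectl applies seen a few lines earlier against the manifests minikube copied into /etc/kubernetes/addons. If an addon looks unhealthy later, re-running the same applies from the node is harmless and idempotent (binary and kubeconfig paths taken from the log):

    # re-apply the addon manifests installed above
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl \
      apply -f /etc/kubernetes/addons/storageclass.yaml -f /etc/kubernetes/addons/storage-provisioner.yaml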
	I1227 20:57:46.637969  504634 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-542467" context rescaled to 1 replicas
	W1227 20:57:48.139745  504634 node_ready.go:57] node "no-preload-542467" has "Ready":"False" status (will retry)
	I1227 20:57:46.718064  507792 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 535.562615ms
	I1227 20:57:46.723752  507792 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 20:57:46.723861  507792 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1227 20:57:46.724209  507792 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 20:57:46.724297  507792 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 20:57:49.238214  507792 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.514112447s
	I1227 20:57:51.038713  507792 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.314894692s
	I1227 20:57:52.725852  507792 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001900708s
	I1227 20:57:52.758741  507792 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 20:57:52.774076  507792 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 20:57:52.789030  507792 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 20:57:52.789247  507792 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-549946 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 20:57:52.802999  507792 kubeadm.go:319] [bootstrap-token] Using token: c8mjkg.epgltg02es1ckihh
	W1227 20:57:50.638353  504634 node_ready.go:57] node "no-preload-542467" has "Ready":"False" status (will retry)
	W1227 20:57:52.638618  504634 node_ready.go:57] node "no-preload-542467" has "Ready":"False" status (will retry)
	I1227 20:57:52.806142  507792 out.go:252]   - Configuring RBAC rules ...
	I1227 20:57:52.806271  507792 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 20:57:52.810271  507792 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 20:57:52.819250  507792 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 20:57:52.823179  507792 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 20:57:52.828916  507792 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 20:57:52.834122  507792 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 20:57:53.134407  507792 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 20:57:53.561747  507792 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 20:57:54.135555  507792 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 20:57:54.137311  507792 kubeadm.go:319] 
	I1227 20:57:54.137382  507792 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 20:57:54.137388  507792 kubeadm.go:319] 
	I1227 20:57:54.137490  507792 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 20:57:54.137495  507792 kubeadm.go:319] 
	I1227 20:57:54.137520  507792 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 20:57:54.137579  507792 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 20:57:54.137629  507792 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 20:57:54.137633  507792 kubeadm.go:319] 
	I1227 20:57:54.137687  507792 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 20:57:54.137691  507792 kubeadm.go:319] 
	I1227 20:57:54.137738  507792 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 20:57:54.137743  507792 kubeadm.go:319] 
	I1227 20:57:54.137794  507792 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 20:57:54.137869  507792 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 20:57:54.137940  507792 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 20:57:54.137945  507792 kubeadm.go:319] 
	I1227 20:57:54.138028  507792 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 20:57:54.138105  507792 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 20:57:54.138113  507792 kubeadm.go:319] 
	I1227 20:57:54.138196  507792 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token c8mjkg.epgltg02es1ckihh \
	I1227 20:57:54.138299  507792 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ff29328d1e0d612c7979c16c69d6042f5f31e931d111cc12c8320ed4e4ab5152 \
	I1227 20:57:54.138319  507792 kubeadm.go:319] 	--control-plane 
	I1227 20:57:54.138323  507792 kubeadm.go:319] 
	I1227 20:57:54.138408  507792 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 20:57:54.138412  507792 kubeadm.go:319] 
	I1227 20:57:54.138494  507792 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token c8mjkg.epgltg02es1ckihh \
	I1227 20:57:54.138604  507792 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ff29328d1e0d612c7979c16c69d6042f5f31e931d111cc12c8320ed4e4ab5152 
	I1227 20:57:54.143507  507792 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 20:57:54.143945  507792 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 20:57:54.144068  507792 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
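The cgroups v1 deprecation warning above is expected on this host and matches the failCgroupV1: false setting minikube wrote into the KubeletConfiguration earlier in this run. Whether a node is actually on the legacy or the unified hierarchy can be checked in one line:

    # cgroup2fs => unified cgroup v2 hierarchy; tmpfs => legacy cgroup v1 (what the warning is about)
    stat -fc %T /sys/fs/cgroup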
	I1227 20:57:54.144092  507792 cni.go:84] Creating CNI manager for ""
	I1227 20:57:54.144104  507792 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:57:54.147269  507792 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1227 20:57:54.150207  507792 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 20:57:54.153983  507792 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 20:57:54.154006  507792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 20:57:54.167815  507792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 20:57:54.511111  507792 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 20:57:54.511233  507792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:57:54.511301  507792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-549946 minikube.k8s.io/updated_at=2025_12_27T20_57_54_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562 minikube.k8s.io/name=newest-cni-549946 minikube.k8s.io/primary=true
	I1227 20:57:54.764756  507792 ops.go:34] apiserver oom_adj: -16
	I1227 20:57:54.764859  507792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:57:55.265014  507792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:57:55.764976  507792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:57:56.265047  507792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:57:56.765644  507792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:57:57.265500  507792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:57:57.764986  507792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:57:58.265680  507792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:57:58.765907  507792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:57:58.871992  507792 kubeadm.go:1114] duration metric: took 4.360804144s to wait for elevateKubeSystemPrivileges
	I1227 20:57:58.872023  507792 kubeadm.go:403] duration metric: took 17.60727727s to StartCluster
	I1227 20:57:58.872041  507792 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:57:58.872103  507792 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:57:58.873010  507792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:57:58.873243  507792 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:57:58.873358  507792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 20:57:58.873638  507792 config.go:182] Loaded profile config "newest-cni-549946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:57:58.873685  507792 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:57:58.873748  507792 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-549946"
	I1227 20:57:58.873761  507792 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-549946"
	I1227 20:57:58.873786  507792 host.go:66] Checking if "newest-cni-549946" exists ...
	I1227 20:57:58.874281  507792 cli_runner.go:164] Run: docker container inspect newest-cni-549946 --format={{.State.Status}}
	I1227 20:57:58.874452  507792 addons.go:70] Setting default-storageclass=true in profile "newest-cni-549946"
	I1227 20:57:58.874477  507792 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-549946"
	I1227 20:57:58.874738  507792 cli_runner.go:164] Run: docker container inspect newest-cni-549946 --format={{.State.Status}}
	I1227 20:57:58.878393  507792 out.go:179] * Verifying Kubernetes components...
	I1227 20:57:58.885645  507792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:57:58.917875  507792 addons.go:239] Setting addon default-storageclass=true in "newest-cni-549946"
	I1227 20:57:58.917917  507792 host.go:66] Checking if "newest-cni-549946" exists ...
	I1227 20:57:58.918658  507792 cli_runner.go:164] Run: docker container inspect newest-cni-549946 --format={{.State.Status}}
	I1227 20:57:58.925576  507792 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1227 20:57:55.139543  504634 node_ready.go:57] node "no-preload-542467" has "Ready":"False" status (will retry)
	W1227 20:57:57.638904  504634 node_ready.go:57] node "no-preload-542467" has "Ready":"False" status (will retry)
	I1227 20:57:58.931049  507792 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:57:58.931075  507792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:57:58.931145  507792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:57:58.949652  507792 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:57:58.949674  507792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:57:58.949735  507792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:57:58.984883  507792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:57:58.998513  507792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:57:59.265301  507792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 20:57:59.265419  507792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:57:59.270241  507792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:57:59.290853  507792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:57:59.867119  507792 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1227 20:57:59.867209  507792 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:57:59.867350  507792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:58:00.400630  507792 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-549946" context rescaled to 1 replicas
	I1227 20:58:00.674982  507792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.384094097s)
	I1227 20:58:00.675238  507792 api_server.go:72] duration metric: took 1.801963208s to wait for apiserver process to appear ...
	I1227 20:58:00.675251  507792 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:58:00.675277  507792 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 20:58:00.678193  507792 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1227 20:58:00.681855  507792 addons.go:530] duration metric: took 1.808165082s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1227 20:58:00.694316  507792 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1227 20:58:00.695756  507792 api_server.go:141] control plane version: v1.35.0
	I1227 20:58:00.695784  507792 api_server.go:131] duration metric: took 20.525995ms to wait for apiserver health ...
	I1227 20:58:00.695793  507792 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:58:00.699763  507792 system_pods.go:59] 8 kube-system pods found
	I1227 20:58:00.699798  507792 system_pods.go:61] "coredns-7d764666f9-lwqng" [fad8ca65-36d9-4617-8bc9-d4c9def1d5b5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 20:58:00.699826  507792 system_pods.go:61] "etcd-newest-cni-549946" [a5d0bbff-5553-4cc5-ab87-057dbf70fa61] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:58:00.699842  507792 system_pods.go:61] "kindnet-x98wp" [344e609e-29a5-476e-9578-0ac5e389ff93] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:58:00.699850  507792 system_pods.go:61] "kube-apiserver-newest-cni-549946" [dd80588f-c85b-4a1b-a933-c2e2a987d7ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:58:00.699860  507792 system_pods.go:61] "kube-controller-manager-newest-cni-549946" [9cd183d8-c947-4b6e-a4cd-3603c51d4909] Running
	I1227 20:58:00.699865  507792 system_pods.go:61] "kube-proxy-j8h9m" [e72d123e-acc5-453f-b934-82214364e93d] Running
	I1227 20:58:00.699872  507792 system_pods.go:61] "kube-scheduler-newest-cni-549946" [ac00b2bc-7be9-46a6-8025-45f27e7dfebc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:58:00.699881  507792 system_pods.go:61] "storage-provisioner" [7b4a0a3b-3bad-4818-8f46-2b25602b28c3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 20:58:00.699899  507792 system_pods.go:74] duration metric: took 4.088406ms to wait for pod list to return data ...
	I1227 20:58:00.699915  507792 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:58:00.703017  507792 default_sa.go:45] found service account: "default"
	I1227 20:58:00.703042  507792 default_sa.go:55] duration metric: took 3.120481ms for default service account to be created ...
	I1227 20:58:00.703054  507792 kubeadm.go:587] duration metric: took 1.829780837s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 20:58:00.703100  507792 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:58:00.706494  507792 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:58:00.706524  507792 node_conditions.go:123] node cpu capacity is 2
	I1227 20:58:00.706537  507792 node_conditions.go:105] duration metric: took 3.431857ms to run NodePressure ...
	I1227 20:58:00.706576  507792 start.go:242] waiting for startup goroutines ...
	I1227 20:58:00.706591  507792 start.go:247] waiting for cluster config update ...
	I1227 20:58:00.706603  507792 start.go:256] writing updated cluster config ...
	I1227 20:58:00.706883  507792 ssh_runner.go:195] Run: rm -f paused
	I1227 20:58:00.821585  507792 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 20:58:00.824945  507792 out.go:203] 
	W1227 20:58:00.828004  507792 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 20:58:00.830969  507792 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 20:58:00.834143  507792 out.go:179] * Done! kubectl is now configured to use "newest-cni-549946" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 27 20:57:47 newest-cni-549946 crio[837]: time="2025-12-27T20:57:47.448774742Z" level=info msg="Created container a1f0623ef6e7e0ce3a25ad6076bc6eec281c19491bad5480c3604d787a907087: kube-system/etcd-newest-cni-549946/etcd" id=89975f33-9564-4509-8853-1eb0c255eb6c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:57:47 newest-cni-549946 crio[837]: time="2025-12-27T20:57:47.450646675Z" level=info msg="Starting container: a1f0623ef6e7e0ce3a25ad6076bc6eec281c19491bad5480c3604d787a907087" id=b4e3f0ea-2f2a-4dca-aff7-90b1ca47ca0c name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:57:47 newest-cni-549946 crio[837]: time="2025-12-27T20:57:47.469917442Z" level=info msg="Started container" PID=1228 containerID=a1f0623ef6e7e0ce3a25ad6076bc6eec281c19491bad5480c3604d787a907087 description=kube-system/etcd-newest-cni-549946/etcd id=b4e3f0ea-2f2a-4dca-aff7-90b1ca47ca0c name=/runtime.v1.RuntimeService/StartContainer sandboxID=0dd05e75cb84c1a46d029cc216deecf01b5f5e7b332fad8b3f96a90fc97a990f
	Dec 27 20:57:59 newest-cni-549946 crio[837]: time="2025-12-27T20:57:59.494419Z" level=info msg="Running pod sandbox: kube-system/kindnet-x98wp/POD" id=3becb307-5b14-4301-a526-5cbf1daae64f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:57:59 newest-cni-549946 crio[837]: time="2025-12-27T20:57:59.494500779Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:57:59 newest-cni-549946 crio[837]: time="2025-12-27T20:57:59.511366739Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=3becb307-5b14-4301-a526-5cbf1daae64f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:57:59 newest-cni-549946 crio[837]: time="2025-12-27T20:57:59.521176112Z" level=info msg="Ran pod sandbox 7bfccf98c6a4ef6698e5552f4662311e53b4057a6d081801f773b03c5e3f4a9b with infra container: kube-system/kindnet-x98wp/POD" id=3becb307-5b14-4301-a526-5cbf1daae64f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:57:59 newest-cni-549946 crio[837]: time="2025-12-27T20:57:59.523366895Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=2575d29b-eadc-4847-9b05-a5366c427c10 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:57:59 newest-cni-549946 crio[837]: time="2025-12-27T20:57:59.523531748Z" level=info msg="Image docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 not found" id=2575d29b-eadc-4847-9b05-a5366c427c10 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:57:59 newest-cni-549946 crio[837]: time="2025-12-27T20:57:59.523591431Z" level=info msg="Neither image nor artfiact docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 found" id=2575d29b-eadc-4847-9b05-a5366c427c10 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:57:59 newest-cni-549946 crio[837]: time="2025-12-27T20:57:59.52489446Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=6dd578af-1253-4e79-9c15-3049cda6de9b name=/runtime.v1.ImageService/PullImage
	Dec 27 20:57:59 newest-cni-549946 crio[837]: time="2025-12-27T20:57:59.528033115Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\""
	Dec 27 20:57:59 newest-cni-549946 crio[837]: time="2025-12-27T20:57:59.568918144Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-j8h9m/POD" id=09856f18-9370-4320-843c-5c7200843384 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:57:59 newest-cni-549946 crio[837]: time="2025-12-27T20:57:59.568983693Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:57:59 newest-cni-549946 crio[837]: time="2025-12-27T20:57:59.576268664Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=09856f18-9370-4320-843c-5c7200843384 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:57:59 newest-cni-549946 crio[837]: time="2025-12-27T20:57:59.580457153Z" level=info msg="Ran pod sandbox 78226897ae24190f2300e7edefea2d27c006063ad2e7d32bbf3d234a4882d27b with infra container: kube-system/kube-proxy-j8h9m/POD" id=09856f18-9370-4320-843c-5c7200843384 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:57:59 newest-cni-549946 crio[837]: time="2025-12-27T20:57:59.581799836Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=48080bc0-534d-4226-9d3e-d0528d50ffa9 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:57:59 newest-cni-549946 crio[837]: time="2025-12-27T20:57:59.587262103Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=81ba281d-3e9b-4eed-a2a7-bd480199e9f3 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:57:59 newest-cni-549946 crio[837]: time="2025-12-27T20:57:59.596502073Z" level=info msg="Creating container: kube-system/kube-proxy-j8h9m/kube-proxy" id=532e634e-65b9-4f49-8d55-7dc76b865678 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:57:59 newest-cni-549946 crio[837]: time="2025-12-27T20:57:59.596633671Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:57:59 newest-cni-549946 crio[837]: time="2025-12-27T20:57:59.619976517Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:57:59 newest-cni-549946 crio[837]: time="2025-12-27T20:57:59.624437499Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:57:59 newest-cni-549946 crio[837]: time="2025-12-27T20:57:59.654824017Z" level=info msg="Created container fc58969b1d8cce8dedb02b81a4083653777ccf7db61520d6a2d8a438895c709a: kube-system/kube-proxy-j8h9m/kube-proxy" id=532e634e-65b9-4f49-8d55-7dc76b865678 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:57:59 newest-cni-549946 crio[837]: time="2025-12-27T20:57:59.660492416Z" level=info msg="Starting container: fc58969b1d8cce8dedb02b81a4083653777ccf7db61520d6a2d8a438895c709a" id=e0148f09-3fd1-4657-b0e1-3a29d4abb4ff name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:57:59 newest-cni-549946 crio[837]: time="2025-12-27T20:57:59.663827825Z" level=info msg="Started container" PID=1482 containerID=fc58969b1d8cce8dedb02b81a4083653777ccf7db61520d6a2d8a438895c709a description=kube-system/kube-proxy-j8h9m/kube-proxy id=e0148f09-3fd1-4657-b0e1-3a29d4abb4ff name=/runtime.v1.RuntimeService/StartContainer sandboxID=78226897ae24190f2300e7edefea2d27c006063ad2e7d32bbf3d234a4882d27b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	fc58969b1d8cc       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   2 seconds ago       Running             kube-proxy                0                   78226897ae241       kube-proxy-j8h9m                            kube-system
	5c2c5b0120de1       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   15 seconds ago      Running             kube-apiserver            0                   fa3bf29c4ebc1       kube-apiserver-newest-cni-549946            kube-system
	4ba8b602f3c54       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   15 seconds ago      Running             kube-controller-manager   0                   a5b271203a3c6       kube-controller-manager-newest-cni-549946   kube-system
	981d853796214       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   15 seconds ago      Running             kube-scheduler            0                   83f03e9d6b884       kube-scheduler-newest-cni-549946            kube-system
	a1f0623ef6e7e       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   15 seconds ago      Running             etcd                      0                   0dd05e75cb84c       etcd-newest-cni-549946                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-549946
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-549946
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=newest-cni-549946
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_57_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:57:51 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-549946
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:57:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:57:53 +0000   Sat, 27 Dec 2025 20:57:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:57:53 +0000   Sat, 27 Dec 2025 20:57:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:57:53 +0000   Sat, 27 Dec 2025 20:57:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 27 Dec 2025 20:57:53 +0000   Sat, 27 Dec 2025 20:57:48 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-549946
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                6b7382ae-8399-40ff-bb99-a6dfaed9059c
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-549946                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-x98wp                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-549946             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-549946    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-j8h9m                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-549946             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  5s    node-controller  Node newest-cni-549946 event: Registered Node newest-cni-549946 in Controller
	
	
	==> dmesg <==
	[Dec27 20:25] overlayfs: idmapped layers are currently not supported
	[ +35.447549] overlayfs: idmapped layers are currently not supported
	[Dec27 20:26] overlayfs: idmapped layers are currently not supported
	[Dec27 20:27] overlayfs: idmapped layers are currently not supported
	[  +6.770645] overlayfs: idmapped layers are currently not supported
	[Dec27 20:28] overlayfs: idmapped layers are currently not supported
	[ +25.872751] overlayfs: idmapped layers are currently not supported
	[Dec27 20:29] overlayfs: idmapped layers are currently not supported
	[ +32.997137] overlayfs: idmapped layers are currently not supported
	[Dec27 20:31] overlayfs: idmapped layers are currently not supported
	[Dec27 20:33] overlayfs: idmapped layers are currently not supported
	[ +33.772475] overlayfs: idmapped layers are currently not supported
	[Dec27 20:39] overlayfs: idmapped layers are currently not supported
	[Dec27 20:40] overlayfs: idmapped layers are currently not supported
	[Dec27 20:44] overlayfs: idmapped layers are currently not supported
	[Dec27 20:45] overlayfs: idmapped layers are currently not supported
	[Dec27 20:49] overlayfs: idmapped layers are currently not supported
	[Dec27 20:50] overlayfs: idmapped layers are currently not supported
	[Dec27 20:51] overlayfs: idmapped layers are currently not supported
	[Dec27 20:52] overlayfs: idmapped layers are currently not supported
	[Dec27 20:53] overlayfs: idmapped layers are currently not supported
	[Dec27 20:55] overlayfs: idmapped layers are currently not supported
	[ +57.272039] overlayfs: idmapped layers are currently not supported
	[Dec27 20:57] overlayfs: idmapped layers are currently not supported
	[ +34.093681] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a1f0623ef6e7e0ce3a25ad6076bc6eec281c19491bad5480c3604d787a907087] <==
	{"level":"info","ts":"2025-12-27T20:57:47.647771Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T20:57:47.685523Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T20:57:47.685669Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T20:57:47.685768Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-12-27T20:57:47.685829Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:57:47.685869Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:57:47.689495Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T20:57:47.689583Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:57:47.689627Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-12-27T20:57:47.689671Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T20:57:47.690897Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:newest-cni-549946 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:57:47.690965Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:57:47.691163Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:57:47.734743Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:57:47.759055Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:57:47.760308Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:57:47.760418Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:57:47.760490Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:57:47.737182Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:57:47.761491Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T20:57:47.761595Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T20:57:47.803631Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:57:47.809177Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-27T20:57:47.837823Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:57:47.838016Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:58:02 up  2:40,  0 user,  load average: 4.21, 2.13, 1.90
	Linux newest-cni-549946 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [5c2c5b0120de18041aa643290cff9f352ec5fb4d9d4590f14044acc03b44b438] <==
	I1227 20:57:51.030734       1 aggregator.go:187] initial CRD sync complete...
	I1227 20:57:51.030836       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 20:57:51.030878       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:57:51.030925       1 cache.go:39] Caches are synced for autoregister controller
	I1227 20:57:51.058811       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:57:51.119688       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:57:51.124560       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 20:57:51.131389       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:57:51.729373       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1227 20:57:51.734400       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1227 20:57:51.734427       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:57:52.414847       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:57:52.465084       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:57:52.531782       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 20:57:52.540874       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1227 20:57:52.542015       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:57:52.546904       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:57:52.987398       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:57:53.542794       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:57:53.560056       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 20:57:53.571506       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 20:57:58.442534       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:57:58.447565       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:57:58.589100       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:57:59.021441       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [4ba8b602f3c54e21944632bca4fc23dd4f092375b42cc3f098d97c74d03d65fa] <==
	I1227 20:57:57.792230       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:57:57.792234       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:57.792549       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:57.793081       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:57.793115       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:57.793126       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:57.793169       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:57.793229       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:57.793282       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:57.793317       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:57.793346       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:57.798986       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:57.799493       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:57.799555       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:57.799769       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:57.799843       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:57.806003       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:57.806048       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:57.811895       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:57.819961       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-549946" podCIDRs=["10.42.0.0/24"]
	I1227 20:57:57.836115       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:57:57.891621       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:57.891648       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:57:57.891654       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:57:57.936860       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [fc58969b1d8cce8dedb02b81a4083653777ccf7db61520d6a2d8a438895c709a] <==
	I1227 20:57:59.746918       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:57:59.871816       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:57:59.973531       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:59.973573       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1227 20:57:59.973645       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:58:00.152209       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:58:00.152288       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:58:00.240296       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:58:00.240715       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:58:00.240734       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:58:00.258801       1 config.go:200] "Starting service config controller"
	I1227 20:58:00.258826       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:58:00.306993       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:58:00.307025       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:58:00.307055       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:58:00.307060       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:58:00.307776       1 config.go:309] "Starting node config controller"
	I1227 20:58:00.307787       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:58:00.307795       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:58:00.560823       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:58:00.607178       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 20:58:00.607217       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [981d85379621463ff21c5e8cc26ae7b39714b9e167b53928476a1bb611189b98] <==
	E1227 20:57:51.052246       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 20:57:51.052333       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 20:57:51.052408       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 20:57:51.052506       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 20:57:51.052566       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 20:57:51.053031       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 20:57:51.061038       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 20:57:51.061141       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 20:57:51.061197       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 20:57:51.061251       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 20:57:51.061385       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 20:57:51.061577       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 20:57:51.061689       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 20:57:51.061824       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 20:57:51.061927       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 20:57:51.917644       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 20:57:51.923460       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 20:57:52.005011       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 20:57:52.021224       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 20:57:52.072743       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 20:57:52.096986       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 20:57:52.130232       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 20:57:52.145997       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 20:57:52.444769       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	I1227 20:57:54.213064       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:57:54 newest-cni-549946 kubelet[1294]: E1227 20:57:54.568234    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-549946" containerName="kube-apiserver"
	Dec 27 20:57:54 newest-cni-549946 kubelet[1294]: I1227 20:57:54.617079    1294 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-549946" podStartSLOduration=1.617062808 podStartE2EDuration="1.617062808s" podCreationTimestamp="2025-12-27 20:57:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:57:54.580442713 +0000 UTC m=+1.215054052" watchObservedRunningTime="2025-12-27 20:57:54.617062808 +0000 UTC m=+1.251674139"
	Dec 27 20:57:54 newest-cni-549946 kubelet[1294]: I1227 20:57:54.677151    1294 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-549946" podStartSLOduration=1.677134636 podStartE2EDuration="1.677134636s" podCreationTimestamp="2025-12-27 20:57:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:57:54.617370771 +0000 UTC m=+1.251982135" watchObservedRunningTime="2025-12-27 20:57:54.677134636 +0000 UTC m=+1.311745967"
	Dec 27 20:57:55 newest-cni-549946 kubelet[1294]: E1227 20:57:55.539657    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-549946" containerName="etcd"
	Dec 27 20:57:55 newest-cni-549946 kubelet[1294]: E1227 20:57:55.539951    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-549946" containerName="kube-apiserver"
	Dec 27 20:57:55 newest-cni-549946 kubelet[1294]: E1227 20:57:55.540142    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-549946" containerName="kube-scheduler"
	Dec 27 20:57:56 newest-cni-549946 kubelet[1294]: E1227 20:57:56.542118    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-549946" containerName="kube-scheduler"
	Dec 27 20:57:56 newest-cni-549946 kubelet[1294]: E1227 20:57:56.542923    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-549946" containerName="etcd"
	Dec 27 20:57:57 newest-cni-549946 kubelet[1294]: I1227 20:57:57.841208    1294 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 27 20:57:57 newest-cni-549946 kubelet[1294]: I1227 20:57:57.843272    1294 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 27 20:57:58 newest-cni-549946 kubelet[1294]: E1227 20:57:58.964254    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-549946" containerName="kube-scheduler"
	Dec 27 20:57:59 newest-cni-549946 kubelet[1294]: I1227 20:57:59.230939    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vbss\" (UniqueName: \"kubernetes.io/projected/344e609e-29a5-476e-9578-0ac5e389ff93-kube-api-access-6vbss\") pod \"kindnet-x98wp\" (UID: \"344e609e-29a5-476e-9578-0ac5e389ff93\") " pod="kube-system/kindnet-x98wp"
	Dec 27 20:57:59 newest-cni-549946 kubelet[1294]: I1227 20:57:59.230994    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/344e609e-29a5-476e-9578-0ac5e389ff93-lib-modules\") pod \"kindnet-x98wp\" (UID: \"344e609e-29a5-476e-9578-0ac5e389ff93\") " pod="kube-system/kindnet-x98wp"
	Dec 27 20:57:59 newest-cni-549946 kubelet[1294]: I1227 20:57:59.231037    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/344e609e-29a5-476e-9578-0ac5e389ff93-xtables-lock\") pod \"kindnet-x98wp\" (UID: \"344e609e-29a5-476e-9578-0ac5e389ff93\") " pod="kube-system/kindnet-x98wp"
	Dec 27 20:57:59 newest-cni-549946 kubelet[1294]: I1227 20:57:59.231058    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/344e609e-29a5-476e-9578-0ac5e389ff93-cni-cfg\") pod \"kindnet-x98wp\" (UID: \"344e609e-29a5-476e-9578-0ac5e389ff93\") " pod="kube-system/kindnet-x98wp"
	Dec 27 20:57:59 newest-cni-549946 kubelet[1294]: I1227 20:57:59.332020    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e72d123e-acc5-453f-b934-82214364e93d-lib-modules\") pod \"kube-proxy-j8h9m\" (UID: \"e72d123e-acc5-453f-b934-82214364e93d\") " pod="kube-system/kube-proxy-j8h9m"
	Dec 27 20:57:59 newest-cni-549946 kubelet[1294]: I1227 20:57:59.332091    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e72d123e-acc5-453f-b934-82214364e93d-xtables-lock\") pod \"kube-proxy-j8h9m\" (UID: \"e72d123e-acc5-453f-b934-82214364e93d\") " pod="kube-system/kube-proxy-j8h9m"
	Dec 27 20:57:59 newest-cni-549946 kubelet[1294]: I1227 20:57:59.332114    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnwvb\" (UniqueName: \"kubernetes.io/projected/e72d123e-acc5-453f-b934-82214364e93d-kube-api-access-xnwvb\") pod \"kube-proxy-j8h9m\" (UID: \"e72d123e-acc5-453f-b934-82214364e93d\") " pod="kube-system/kube-proxy-j8h9m"
	Dec 27 20:57:59 newest-cni-549946 kubelet[1294]: I1227 20:57:59.332260    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e72d123e-acc5-453f-b934-82214364e93d-kube-proxy\") pod \"kube-proxy-j8h9m\" (UID: \"e72d123e-acc5-453f-b934-82214364e93d\") " pod="kube-system/kube-proxy-j8h9m"
	Dec 27 20:57:59 newest-cni-549946 kubelet[1294]: I1227 20:57:59.400027    1294 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 27 20:57:59 newest-cni-549946 kubelet[1294]: W1227 20:57:59.519270    1294 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/33026e33441a3f96ec992d0bc78455daa35943f22f16bf93834cd28639575522/crio-7bfccf98c6a4ef6698e5552f4662311e53b4057a6d081801f773b03c5e3f4a9b WatchSource:0}: Error finding container 7bfccf98c6a4ef6698e5552f4662311e53b4057a6d081801f773b03c5e3f4a9b: Status 404 returned error can't find the container with id 7bfccf98c6a4ef6698e5552f4662311e53b4057a6d081801f773b03c5e3f4a9b
	Dec 27 20:57:59 newest-cni-549946 kubelet[1294]: W1227 20:57:59.578859    1294 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/33026e33441a3f96ec992d0bc78455daa35943f22f16bf93834cd28639575522/crio-78226897ae24190f2300e7edefea2d27c006063ad2e7d32bbf3d234a4882d27b WatchSource:0}: Error finding container 78226897ae24190f2300e7edefea2d27c006063ad2e7d32bbf3d234a4882d27b: Status 404 returned error can't find the container with id 78226897ae24190f2300e7edefea2d27c006063ad2e7d32bbf3d234a4882d27b
	Dec 27 20:58:00 newest-cni-549946 kubelet[1294]: E1227 20:58:00.205230    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-549946" containerName="etcd"
	Dec 27 20:58:00 newest-cni-549946 kubelet[1294]: E1227 20:58:00.929389    1294 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-549946" containerName="kube-controller-manager"
	Dec 27 20:58:00 newest-cni-549946 kubelet[1294]: I1227 20:58:00.966822    1294 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-j8h9m" podStartSLOduration=1.966805138 podStartE2EDuration="1.966805138s" podCreationTimestamp="2025-12-27 20:57:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:58:00.631648722 +0000 UTC m=+7.266260053" watchObservedRunningTime="2025-12-27 20:58:00.966805138 +0000 UTC m=+7.601416469"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-549946 -n newest-cni-549946
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-549946 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-lwqng storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-549946 describe pod coredns-7d764666f9-lwqng storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-549946 describe pod coredns-7d764666f9-lwqng storage-provisioner: exit status 1 (124.605855ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-lwqng" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-549946 describe pod coredns-7d764666f9-lwqng storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.97s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-542467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1227 20:58:13.386860  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-542467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (368.199805ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:58:13Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-542467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
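Note on the error above: per the stderr, the MK_ADDON_ENABLE_PAUSED exit comes from minikube's paused-state check, which runs `sudo runc list -f json` on the node; the call fails because /run/runc (runc's default state directory) is missing there. A minimal way to reproduce the failing check by hand, assuming the docker driver and that the node container is still named after the profile (as the docker inspect output below shows), would be:

	# hypothetical manual check; container name taken from the profile in this run
	docker exec no-preload-542467 sudo runc list -f json

If the state directory is absent, the same "open /run/runc: no such file or directory" error should be printed.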
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-542467 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-542467 describe deploy/metrics-server -n kube-system: exit status 1 (149.037588ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-542467 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
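The check at start_stop_delete_test.go:219 greps the deployment description for the overridden registry prefix. A narrower query for just the container image, assuming the metrics-server deployment had actually been created (here it was not, hence the empty deployment info above), would be:

	kubectl --context no-preload-542467 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'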
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-542467
helpers_test.go:244: (dbg) docker inspect no-preload-542467:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dd7872488d6d42d5f37285938726aa6ef58b390c3cf12a82967c0d0945a69379",
	        "Created": "2025-12-27T20:57:05.049440772Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 504936,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:57:05.121122183Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/dd7872488d6d42d5f37285938726aa6ef58b390c3cf12a82967c0d0945a69379/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dd7872488d6d42d5f37285938726aa6ef58b390c3cf12a82967c0d0945a69379/hostname",
	        "HostsPath": "/var/lib/docker/containers/dd7872488d6d42d5f37285938726aa6ef58b390c3cf12a82967c0d0945a69379/hosts",
	        "LogPath": "/var/lib/docker/containers/dd7872488d6d42d5f37285938726aa6ef58b390c3cf12a82967c0d0945a69379/dd7872488d6d42d5f37285938726aa6ef58b390c3cf12a82967c0d0945a69379-json.log",
	        "Name": "/no-preload-542467",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-542467:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-542467",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dd7872488d6d42d5f37285938726aa6ef58b390c3cf12a82967c0d0945a69379",
	                "LowerDir": "/var/lib/docker/overlay2/d95d5d7678527e5e563d3be9d484c3b526882d1f0fcfc24ff6bfe885fd5f4f8f-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d95d5d7678527e5e563d3be9d484c3b526882d1f0fcfc24ff6bfe885fd5f4f8f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d95d5d7678527e5e563d3be9d484c3b526882d1f0fcfc24ff6bfe885fd5f4f8f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d95d5d7678527e5e563d3be9d484c3b526882d1f0fcfc24ff6bfe885fd5f4f8f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-542467",
	                "Source": "/var/lib/docker/volumes/no-preload-542467/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-542467",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-542467",
	                "name.minikube.sigs.k8s.io": "no-preload-542467",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dd751c531fe3613f06162b8784ae3e79eaeb4ce253f5c5e9b0ac09b710c25bde",
	            "SandboxKey": "/var/run/docker/netns/dd751c531fe3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-542467": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:e9:c2:05:8e:88",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c1ebbbafc12790a4f974a3988a1224bfe471b8982037ddfef20526083d80bfe8",
	                    "EndpointID": "a8c96b8bbd4c8a20a4012c75ce2d31d5b0c1364980a8d4599a60bec247950d22",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-542467",
	                        "dd7872488d6d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
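The inspect dump above is recorded in full by the post-mortem helper. When only a couple of fields matter, for example the container state and the host port mapped to 22/tcp, a narrower Go-template query of the kind minikube itself runs elsewhere in this log would be:

	docker inspect -f '{{.State.Status}} {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-542467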
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-542467 -n no-preload-542467
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-542467 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-542467 logs -n 25: (1.838939248s)
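Only the last 25 lines are captured below. For a fuller capture, the advice box in this test's stderr suggests writing the complete log to a file, which for this profile would be (assuming it still exists):

	out/minikube-linux-arm64 -p no-preload-542467 logs --file=logs.txt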
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-058924 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:53 UTC │
	│ start   │ -p default-k8s-diff-port-058924 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:53 UTC │ 27 Dec 25 20:54 UTC │
	│ image   │ default-k8s-diff-port-058924 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
	│ pause   │ -p default-k8s-diff-port-058924 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-058924                                                                                                                                                                                                               │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
	│ delete  │ -p default-k8s-diff-port-058924                                                                                                                                                                                                               │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
	│ start   │ -p embed-certs-193865 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:55 UTC │
	│ addons  │ enable metrics-server -p embed-certs-193865 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │                     │
	│ stop    │ -p embed-certs-193865 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │ 27 Dec 25 20:55 UTC │
	│ addons  │ enable dashboard -p embed-certs-193865 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │ 27 Dec 25 20:55 UTC │
	│ start   │ -p embed-certs-193865 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │ 27 Dec 25 20:56 UTC │
	│ image   │ embed-certs-193865 image list --format=json                                                                                                                                                                                                   │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:56 UTC │ 27 Dec 25 20:56 UTC │
	│ pause   │ -p embed-certs-193865 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:56 UTC │                     │
	│ delete  │ -p embed-certs-193865                                                                                                                                                                                                                         │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ delete  │ -p embed-certs-193865                                                                                                                                                                                                                         │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ delete  │ -p disable-driver-mounts-371621                                                                                                                                                                                                               │ disable-driver-mounts-371621 │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ start   │ -p no-preload-542467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-542467            │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:58 UTC │
	│ ssh     │ force-systemd-flag-604544 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-604544    │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ delete  │ -p force-systemd-flag-604544                                                                                                                                                                                                                  │ force-systemd-flag-604544    │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ start   │ -p newest-cni-549946 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-549946            │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:58 UTC │
	│ addons  │ enable metrics-server -p newest-cni-549946 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-549946            │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │                     │
	│ stop    │ -p newest-cni-549946 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-549946            │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ addons  │ enable dashboard -p newest-cni-549946 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-549946            │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ start   │ -p newest-cni-549946 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-549946            │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-542467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-542467            │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:58:05
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:58:05.579669  511686 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:58:05.579898  511686 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:58:05.579928  511686 out.go:374] Setting ErrFile to fd 2...
	I1227 20:58:05.579952  511686 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:58:05.580412  511686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:58:05.581121  511686 out.go:368] Setting JSON to false
	I1227 20:58:05.582445  511686 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9638,"bootTime":1766859448,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:58:05.582520  511686 start.go:143] virtualization:  
	I1227 20:58:05.585842  511686 out.go:179] * [newest-cni-549946] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:58:05.589957  511686 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:58:05.590029  511686 notify.go:221] Checking for updates...
	I1227 20:58:05.596871  511686 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:58:05.599873  511686 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:58:05.602734  511686 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:58:05.605699  511686 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:58:05.608504  511686 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:58:05.611820  511686 config.go:182] Loaded profile config "newest-cni-549946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:58:05.612429  511686 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:58:05.639562  511686 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:58:05.639836  511686 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:58:05.710299  511686 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:58:05.701183266 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:58:05.710404  511686 docker.go:319] overlay module found
	I1227 20:58:05.713567  511686 out.go:179] * Using the docker driver based on existing profile
	I1227 20:58:05.716553  511686 start.go:309] selected driver: docker
	I1227 20:58:05.716573  511686 start.go:928] validating driver "docker" against &{Name:newest-cni-549946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-549946 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:58:05.716701  511686 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:58:05.717439  511686 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:58:05.789602  511686 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:58:05.778418689 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:58:05.789943  511686 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 20:58:05.789967  511686 cni.go:84] Creating CNI manager for ""
	I1227 20:58:05.790020  511686 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:58:05.790051  511686 start.go:353] cluster config:
	{Name:newest-cni-549946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-549946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:58:05.795273  511686 out.go:179] * Starting "newest-cni-549946" primary control-plane node in "newest-cni-549946" cluster
	I1227 20:58:05.798135  511686 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:58:05.801013  511686 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:58:05.803853  511686 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:58:05.803899  511686 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:58:05.803909  511686 cache.go:65] Caching tarball of preloaded images
	I1227 20:58:05.803989  511686 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:58:05.804005  511686 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:58:05.804125  511686 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/config.json ...
	I1227 20:58:05.804333  511686 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:58:05.826712  511686 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:58:05.826731  511686 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:58:05.826752  511686 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:58:05.826783  511686 start.go:360] acquireMachinesLock for newest-cni-549946: {Name:mk8b0ea7d2aaecab8531b3a335f669f52685ec48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:58:05.826839  511686 start.go:364] duration metric: took 34.124µs to acquireMachinesLock for "newest-cni-549946"
	I1227 20:58:05.826864  511686 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:58:05.826869  511686 fix.go:54] fixHost starting: 
	I1227 20:58:05.827164  511686 cli_runner.go:164] Run: docker container inspect newest-cni-549946 --format={{.State.Status}}
	I1227 20:58:05.848340  511686 fix.go:112] recreateIfNeeded on newest-cni-549946: state=Stopped err=<nil>
	W1227 20:58:05.848368  511686 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:58:05.851717  511686 out.go:252] * Restarting existing docker container for "newest-cni-549946" ...
	I1227 20:58:05.851823  511686 cli_runner.go:164] Run: docker start newest-cni-549946
	I1227 20:58:06.186737  511686 cli_runner.go:164] Run: docker container inspect newest-cni-549946 --format={{.State.Status}}
	I1227 20:58:06.221990  511686 kic.go:430] container "newest-cni-549946" state is running.
	I1227 20:58:06.222364  511686 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-549946
	I1227 20:58:06.255265  511686 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/config.json ...
	I1227 20:58:06.255490  511686 machine.go:94] provisionDockerMachine start ...
	I1227 20:58:06.255558  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:06.284882  511686 main.go:144] libmachine: Using SSH client type: native
	I1227 20:58:06.285200  511686 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1227 20:58:06.285208  511686 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:58:06.285790  511686 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56808->127.0.0.1:33448: read: connection reset by peer
	I1227 20:58:09.429036  511686 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-549946
	
	I1227 20:58:09.429104  511686 ubuntu.go:182] provisioning hostname "newest-cni-549946"
	I1227 20:58:09.429174  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:09.448270  511686 main.go:144] libmachine: Using SSH client type: native
	I1227 20:58:09.448616  511686 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1227 20:58:09.448627  511686 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-549946 && echo "newest-cni-549946" | sudo tee /etc/hostname
	I1227 20:58:09.599731  511686 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-549946
	
	I1227 20:58:09.599830  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:09.617017  511686 main.go:144] libmachine: Using SSH client type: native
	I1227 20:58:09.617346  511686 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1227 20:58:09.617365  511686 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-549946' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-549946/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-549946' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:58:09.753625  511686 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:58:09.753650  511686 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:58:09.753675  511686 ubuntu.go:190] setting up certificates
	I1227 20:58:09.753686  511686 provision.go:84] configureAuth start
	I1227 20:58:09.753743  511686 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-549946
	I1227 20:58:09.771756  511686 provision.go:143] copyHostCerts
	I1227 20:58:09.771827  511686 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:58:09.771846  511686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:58:09.771920  511686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:58:09.772025  511686 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:58:09.772034  511686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:58:09.772062  511686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:58:09.772123  511686 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:58:09.772135  511686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:58:09.772162  511686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:58:09.772213  511686 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.newest-cni-549946 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-549946]
	I1227 20:58:09.888037  511686 provision.go:177] copyRemoteCerts
	I1227 20:58:09.888101  511686 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:58:09.888139  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:09.906220  511686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:58:10.004983  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:58:10.031391  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 20:58:10.051651  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:58:10.072539  511686 provision.go:87] duration metric: took 318.829454ms to configureAuth
	I1227 20:58:10.072567  511686 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:58:10.072782  511686 config.go:182] Loaded profile config "newest-cni-549946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:58:10.072891  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:10.090743  511686 main.go:144] libmachine: Using SSH client type: native
	I1227 20:58:10.091067  511686 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1227 20:58:10.091085  511686 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:58:10.425631  511686 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:58:10.425654  511686 machine.go:97] duration metric: took 4.170154292s to provisionDockerMachine
	I1227 20:58:10.425666  511686 start.go:293] postStartSetup for "newest-cni-549946" (driver="docker")
	I1227 20:58:10.425677  511686 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:58:10.425751  511686 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:58:10.425804  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:10.443588  511686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:58:10.545357  511686 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:58:10.548638  511686 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:58:10.548665  511686 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:58:10.548677  511686 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:58:10.548732  511686 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:58:10.548824  511686 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:58:10.548929  511686 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:58:10.556727  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:58:10.574205  511686 start.go:296] duration metric: took 148.523271ms for postStartSetup
	I1227 20:58:10.574304  511686 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:58:10.574346  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:10.591055  511686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:58:10.686497  511686 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:58:10.691401  511686 fix.go:56] duration metric: took 4.864525019s for fixHost
	I1227 20:58:10.691428  511686 start.go:83] releasing machines lock for "newest-cni-549946", held for 4.864579443s
	I1227 20:58:10.691506  511686 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-549946
	I1227 20:58:10.709438  511686 ssh_runner.go:195] Run: cat /version.json
	I1227 20:58:10.709546  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:10.709651  511686 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:58:10.709728  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:10.737635  511686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:58:10.751289  511686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:58:10.938928  511686 ssh_runner.go:195] Run: systemctl --version
	I1227 20:58:10.945537  511686 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:58:10.981187  511686 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:58:10.986019  511686 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:58:10.986087  511686 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:58:10.993646  511686 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:58:10.993670  511686 start.go:496] detecting cgroup driver to use...
	I1227 20:58:10.993727  511686 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:58:10.993792  511686 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:58:11.009752  511686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:58:11.024113  511686 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:58:11.024178  511686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:58:11.040143  511686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:58:11.053938  511686 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:58:11.175025  511686 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:58:11.301907  511686 docker.go:234] disabling docker service ...
	I1227 20:58:11.301979  511686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:58:11.316307  511686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:58:11.328500  511686 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:58:11.451976  511686 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:58:11.572834  511686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:58:11.587070  511686 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:58:11.600577  511686 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:58:11.600670  511686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:11.609586  511686 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:58:11.609685  511686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:11.618475  511686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:11.627541  511686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:11.635766  511686 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:58:11.643217  511686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:11.651666  511686 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:11.661218  511686 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:11.670033  511686 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:58:11.677781  511686 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:58:11.686320  511686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:58:11.806390  511686 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:58:11.997876  511686 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:58:11.998033  511686 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:58:12.002126  511686 start.go:574] Will wait 60s for crictl version
	I1227 20:58:12.002242  511686 ssh_runner.go:195] Run: which crictl
	I1227 20:58:12.005830  511686 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:58:12.034649  511686 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:58:12.034748  511686 ssh_runner.go:195] Run: crio --version
	I1227 20:58:12.067553  511686 ssh_runner.go:195] Run: crio --version
	I1227 20:58:12.101209  511686 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:58:12.104143  511686 cli_runner.go:164] Run: docker network inspect newest-cni-549946 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:58:12.120402  511686 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 20:58:12.124331  511686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
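Note: the one-liner above strips any existing host.minikube.internal entry from /etc/hosts and re-appends it, so repeated starts stay idempotent. A rough Go equivalent, assuming direct (root) write access instead of the temp-file-plus-sudo-cp used in the logged command:

// Sketch only: idempotently (re)add the host.minikube.internal entry,
// mirroring the grep/echo one-liner above.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.85.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	kept := lines[:0]
	for _, line := range lines {
		// Drop any stale entry before re-adding it, like the `grep -v` above.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}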
	I1227 20:58:12.136753  511686 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1227 20:58:12.139416  511686 kubeadm.go:884] updating cluster {Name:newest-cni-549946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-549946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:58:12.139565  511686 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:58:12.139643  511686 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:58:12.172957  511686 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:58:12.172983  511686 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:58:12.173035  511686 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:58:12.199879  511686 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:58:12.199902  511686 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:58:12.199911  511686 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1227 20:58:12.200001  511686 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-549946 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-549946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
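Note: the generated kubelet unit above overrides ExecStart with per-node flags (bootstrap kubeconfig, cgroup settings, hostname-override, node-ip). An illustrative sketch that assembles that ExecStart line from a flag map; the helper buildKubeletExecStart is an assumption of the sketch, not minikube code:

// Sketch only: build the kubelet ExecStart line shown in the unit above.
package main

import (
	"fmt"
	"sort"
	"strings"
)

func buildKubeletExecStart(binDir string, flags map[string]string) string {
	keys := make([]string, 0, len(flags))
	for k := range flags {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic, alphabetical flag order, as in the unit above
	parts := []string{binDir + "/kubelet"}
	for _, k := range keys {
		parts = append(parts, fmt.Sprintf("--%s=%s", k, flags[k]))
	}
	return strings.Join(parts, " ")
}

func main() {
	fmt.Println(buildKubeletExecStart("/var/lib/minikube/binaries/v1.35.0", map[string]string{
		"bootstrap-kubeconfig":     "/etc/kubernetes/bootstrap-kubelet.conf",
		"cgroups-per-qos":          "false",
		"config":                   "/var/lib/kubelet/config.yaml",
		"enforce-node-allocatable": "",
		"hostname-override":        "newest-cni-549946",
		"kubeconfig":               "/etc/kubernetes/kubelet.conf",
		"node-ip":                  "192.168.85.2",
	}))
}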
	I1227 20:58:12.200086  511686 ssh_runner.go:195] Run: crio config
	I1227 20:58:12.253231  511686 cni.go:84] Creating CNI manager for ""
	I1227 20:58:12.253254  511686 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:58:12.253275  511686 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1227 20:58:12.253330  511686 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-549946 NodeName:newest-cni-549946 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:58:12.253507  511686 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-549946"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
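Note: the kubeadm/kubelet/kube-proxy config above places pods in 10.42.0.0/16 and services in 10.96.0.0/12. A quick, illustrative Go check that the two CIDRs are disjoint; this check is not something the logged run performs:

// Sketch only: verify the pod and service CIDRs from the config above do not overlap.
package main

import (
	"fmt"
	"log"
	"net"
)

// overlaps is a valid overlap test for CIDR blocks: two blocks either nest or are disjoint.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	_, pods, err := net.ParseCIDR("10.42.0.0/16")
	if err != nil {
		log.Fatal(err)
	}
	_, services, err := net.ParseCIDR("10.96.0.0/12")
	if err != nil {
		log.Fatal(err)
	}
	if overlaps(pods, services) {
		log.Fatal("pod and service CIDRs overlap")
	}
	fmt.Println("pod CIDR", pods, "and service CIDR", services, "are disjoint")
}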
	I1227 20:58:12.253667  511686 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:58:12.261200  511686 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:58:12.261287  511686 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:58:12.268254  511686 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 20:58:12.280255  511686 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:58:12.292764  511686 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1227 20:58:12.305027  511686 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:58:12.308536  511686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:58:12.318005  511686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:58:12.426126  511686 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:58:12.441235  511686 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946 for IP: 192.168.85.2
	I1227 20:58:12.441257  511686 certs.go:195] generating shared ca certs ...
	I1227 20:58:12.441274  511686 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:12.441415  511686 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:58:12.441493  511686 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:58:12.441507  511686 certs.go:257] generating profile certs ...
	I1227 20:58:12.441591  511686 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/client.key
	I1227 20:58:12.441668  511686 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/apiserver.key.a445ad92
	I1227 20:58:12.441724  511686 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/proxy-client.key
	I1227 20:58:12.441843  511686 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:58:12.441878  511686 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:58:12.441891  511686 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:58:12.441924  511686 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:58:12.441950  511686 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:58:12.441978  511686 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:58:12.442040  511686 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:58:12.442610  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:58:12.475559  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:58:12.495239  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:58:12.514584  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:58:12.532594  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 20:58:12.549293  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 20:58:12.567471  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:58:12.599194  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 20:58:12.622823  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:58:12.647147  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:58:12.667751  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:58:12.686297  511686 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:58:12.701340  511686 ssh_runner.go:195] Run: openssl version
	I1227 20:58:12.709127  511686 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:58:12.717183  511686 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:58:12.724801  511686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:58:12.734947  511686 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:58:12.735009  511686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:58:12.780174  511686 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:58:12.792965  511686 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:58:12.800693  511686 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:58:12.808613  511686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:58:12.812373  511686 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:58:12.812477  511686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:58:12.853581  511686 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:58:12.860900  511686 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:58:12.868730  511686 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:58:12.876059  511686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:58:12.879872  511686 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:58:12.879951  511686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:58:12.929410  511686 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
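Note: each CA above is copied under /usr/share/ca-certificates, linked into /etc/ssl/certs, and then checked via its OpenSSL subject-hash link (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL locates trusted CAs. A sketch of how such a hash link could be produced; the logged run only tests that the link already exists, and running openssl locally from Go is an assumption of the sketch:

// Sketch only: derive the subject hash for a CA and create the "<hash>.0" link.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	const cert = "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	log.Printf("linked %s -> %s", link, cert)
}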
	I1227 20:58:12.936808  511686 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:58:12.940746  511686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:58:12.995512  511686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:58:13.066599  511686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:58:13.113964  511686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:58:13.221548  511686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:58:13.296541  511686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
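Note: the `openssl x509 -checkend 86400` calls above ask whether each control-plane certificate expires within the next 24 hours. A Go equivalent using crypto/x509, shown as an illustrative sketch (the default path is one of the certs checked above):

// Sketch only: report whether a PEM certificate expires within the next 24h,
// mirroring `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatalf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Printf("%s expires within 24h (NotAfter: %s)\n", path, cert.NotAfter)
		os.Exit(1) // mirror openssl's non-zero exit for "will expire"
	}
	fmt.Printf("%s is valid for at least another 24h\n", path)
}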
	I1227 20:58:13.512869  511686 kubeadm.go:401] StartCluster: {Name:newest-cni-549946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-549946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:58:13.512958  511686 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:58:13.513022  511686 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:58:13.656475  511686 cri.go:96] found id: "c0bd9fdc2ab5940221455fa985fb0081e3b368f0836df24c03263c5d6dcce82f"
	I1227 20:58:13.656502  511686 cri.go:96] found id: "7fdca341cdd33eaa6d5b453c79b202a239ceef8d792a7c82a51f302277a8b8ce"
	I1227 20:58:13.656508  511686 cri.go:96] found id: "0f6243dabab6f6acf95adfec716a9b5499ae2e36626a7d7c095f59e8c54137e7"
	I1227 20:58:13.656512  511686 cri.go:96] found id: "d216918be758a4a0fd5e7b8edf4f0ee84f75b9bdcd96a5fd40cbde5de645de91"
	I1227 20:58:13.656516  511686 cri.go:96] found id: ""
	I1227 20:58:13.656567  511686 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:58:13.710865  511686 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:58:13Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:58:13.710953  511686 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:58:13.724408  511686 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:58:13.724429  511686 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:58:13.724489  511686 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:58:13.732931  511686 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:58:13.733467  511686 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-549946" does not appear in /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:58:13.733706  511686 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-272475/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-549946" cluster setting kubeconfig missing "newest-cni-549946" context setting]
	I1227 20:58:13.734122  511686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
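Note: the kubeconfig repair above notices that the newest-cni-549946 cluster and context entries are missing and rewrites the file under a lock. A sketch of the same presence check, decoding the kubeconfig into a minimal struct; the use of gopkg.in/yaml.v3 and the struct shape are assumptions of this sketch, not minikube's implementation:

// Sketch only: check whether a named cluster and context exist in a kubeconfig.
package main

import (
	"fmt"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// namedEntry captures just the "name" field of a clusters/contexts item.
type namedEntry struct {
	Name string `yaml:"name"`
}

type kubeconfig struct {
	Clusters []namedEntry `yaml:"clusters"`
	Contexts []namedEntry `yaml:"contexts"`
}

func has(entries []namedEntry, want string) bool {
	for _, e := range entries {
		if e.Name == want {
			return true
		}
	}
	return false
}

func main() {
	// Path taken from the log above; pass a different kubeconfig as the first argument if needed.
	path := "/home/jenkins/minikube-integration/22332-272475/kubeconfig"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	var cfg kubeconfig
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		log.Fatal(err)
	}
	const profile = "newest-cni-549946"
	fmt.Printf("cluster %q present: %v\n", profile, has(cfg.Clusters, profile))
	fmt.Printf("context %q present: %v\n", profile, has(cfg.Contexts, profile))
}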
	I1227 20:58:13.735809  511686 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:58:13.747032  511686 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1227 20:58:13.747066  511686 kubeadm.go:602] duration metric: took 22.631375ms to restartPrimaryControlPlane
	I1227 20:58:13.747076  511686 kubeadm.go:403] duration metric: took 234.218644ms to StartCluster
	I1227 20:58:13.747096  511686 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:13.747153  511686 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:58:13.747986  511686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:13.748195  511686 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:58:13.748574  511686 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:58:13.748653  511686 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-549946"
	I1227 20:58:13.748669  511686 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-549946"
	W1227 20:58:13.748679  511686 addons.go:248] addon storage-provisioner should already be in state true
	I1227 20:58:13.748702  511686 host.go:66] Checking if "newest-cni-549946" exists ...
	I1227 20:58:13.749246  511686 cli_runner.go:164] Run: docker container inspect newest-cni-549946 --format={{.State.Status}}
	I1227 20:58:13.749900  511686 addons.go:70] Setting dashboard=true in profile "newest-cni-549946"
	I1227 20:58:13.749916  511686 addons.go:239] Setting addon dashboard=true in "newest-cni-549946"
	W1227 20:58:13.749923  511686 addons.go:248] addon dashboard should already be in state true
	I1227 20:58:13.749948  511686 host.go:66] Checking if "newest-cni-549946" exists ...
	I1227 20:58:13.750354  511686 cli_runner.go:164] Run: docker container inspect newest-cni-549946 --format={{.State.Status}}
	I1227 20:58:13.750775  511686 config.go:182] Loaded profile config "newest-cni-549946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:58:13.750936  511686 addons.go:70] Setting default-storageclass=true in profile "newest-cni-549946"
	I1227 20:58:13.750958  511686 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-549946"
	I1227 20:58:13.756044  511686 cli_runner.go:164] Run: docker container inspect newest-cni-549946 --format={{.State.Status}}
	I1227 20:58:13.757119  511686 out.go:179] * Verifying Kubernetes components...
	I1227 20:58:13.762979  511686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:58:13.833534  511686 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:58:13.833652  511686 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 20:58:13.836427  511686 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	
	
	==> CRI-O <==
	Dec 27 20:58:00 no-preload-542467 crio[837]: time="2025-12-27T20:58:00.541949106Z" level=info msg="Created container 7a603ae151667b8efd1ed48159df9b0a004a556363c63f0550b776a2a924bc58: kube-system/coredns-7d764666f9-p7xs9/coredns" id=c21043a9-b14d-4885-81d2-c9d63c5b95d8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:58:00 no-preload-542467 crio[837]: time="2025-12-27T20:58:00.548046622Z" level=info msg="Starting container: 7a603ae151667b8efd1ed48159df9b0a004a556363c63f0550b776a2a924bc58" id=dc864c01-d765-44ff-b664-87b1cf8e3270 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:58:00 no-preload-542467 crio[837]: time="2025-12-27T20:58:00.561857478Z" level=info msg="Started container" PID=2439 containerID=7a603ae151667b8efd1ed48159df9b0a004a556363c63f0550b776a2a924bc58 description=kube-system/coredns-7d764666f9-p7xs9/coredns id=dc864c01-d765-44ff-b664-87b1cf8e3270 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e3daf65b195210f54e51dd7ee35cb2a002aa143274a5cb9b9a711fbc5adf51da
	Dec 27 20:58:04 no-preload-542467 crio[837]: time="2025-12-27T20:58:04.189412629Z" level=info msg="Running pod sandbox: default/busybox/POD" id=34ab90fc-f327-4650-b325-662c0dbd9a36 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:58:04 no-preload-542467 crio[837]: time="2025-12-27T20:58:04.189524216Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:58:04 no-preload-542467 crio[837]: time="2025-12-27T20:58:04.1953975Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:edb204f878d69450dabfd94bad2121ad88b7195abd7307c3b15924ef00ed2a37 UID:839817a5-386c-47cf-acc7-77e328ee53be NetNS:/var/run/netns/efeefc4a-fca0-4163-85cd-7240e42d305a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000baf430}] Aliases:map[]}"
	Dec 27 20:58:04 no-preload-542467 crio[837]: time="2025-12-27T20:58:04.195449412Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 27 20:58:04 no-preload-542467 crio[837]: time="2025-12-27T20:58:04.204395671Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:edb204f878d69450dabfd94bad2121ad88b7195abd7307c3b15924ef00ed2a37 UID:839817a5-386c-47cf-acc7-77e328ee53be NetNS:/var/run/netns/efeefc4a-fca0-4163-85cd-7240e42d305a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000baf430}] Aliases:map[]}"
	Dec 27 20:58:04 no-preload-542467 crio[837]: time="2025-12-27T20:58:04.204546846Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 27 20:58:04 no-preload-542467 crio[837]: time="2025-12-27T20:58:04.208305504Z" level=info msg="Ran pod sandbox edb204f878d69450dabfd94bad2121ad88b7195abd7307c3b15924ef00ed2a37 with infra container: default/busybox/POD" id=34ab90fc-f327-4650-b325-662c0dbd9a36 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:58:04 no-preload-542467 crio[837]: time="2025-12-27T20:58:04.209802046Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bf883e63-5fcb-4783-b630-9914089637c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:58:04 no-preload-542467 crio[837]: time="2025-12-27T20:58:04.209921805Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=bf883e63-5fcb-4783-b630-9914089637c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:58:04 no-preload-542467 crio[837]: time="2025-12-27T20:58:04.209968204Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=bf883e63-5fcb-4783-b630-9914089637c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:58:04 no-preload-542467 crio[837]: time="2025-12-27T20:58:04.211093917Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e7cc8a2e-a137-44aa-817c-e2fc140dfcfe name=/runtime.v1.ImageService/PullImage
	Dec 27 20:58:04 no-preload-542467 crio[837]: time="2025-12-27T20:58:04.213191303Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 27 20:58:06 no-preload-542467 crio[837]: time="2025-12-27T20:58:06.20573789Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=e7cc8a2e-a137-44aa-817c-e2fc140dfcfe name=/runtime.v1.ImageService/PullImage
	Dec 27 20:58:06 no-preload-542467 crio[837]: time="2025-12-27T20:58:06.206737042Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=464947f7-0226-40d7-92c5-a851522da6c9 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:58:06 no-preload-542467 crio[837]: time="2025-12-27T20:58:06.222538162Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=32f479b7-2c9b-4fc3-b0d2-09db13cbc274 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:58:06 no-preload-542467 crio[837]: time="2025-12-27T20:58:06.237744993Z" level=info msg="Creating container: default/busybox/busybox" id=392a4d02-1abf-4930-8159-df0133498eda name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:58:06 no-preload-542467 crio[837]: time="2025-12-27T20:58:06.238037473Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:58:06 no-preload-542467 crio[837]: time="2025-12-27T20:58:06.245508457Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:58:06 no-preload-542467 crio[837]: time="2025-12-27T20:58:06.259155699Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:58:06 no-preload-542467 crio[837]: time="2025-12-27T20:58:06.323045107Z" level=info msg="Created container c2a2ef049ddbdd024ce50ee73e90e62504ab93283d5ed1cc724fac800391577c: default/busybox/busybox" id=392a4d02-1abf-4930-8159-df0133498eda name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:58:06 no-preload-542467 crio[837]: time="2025-12-27T20:58:06.324175193Z" level=info msg="Starting container: c2a2ef049ddbdd024ce50ee73e90e62504ab93283d5ed1cc724fac800391577c" id=895c3ad2-4c66-4eff-a9ef-ef15bd69cb14 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:58:06 no-preload-542467 crio[837]: time="2025-12-27T20:58:06.328069225Z" level=info msg="Started container" PID=2498 containerID=c2a2ef049ddbdd024ce50ee73e90e62504ab93283d5ed1cc724fac800391577c description=default/busybox/busybox id=895c3ad2-4c66-4eff-a9ef-ef15bd69cb14 name=/runtime.v1.RuntimeService/StartContainer sandboxID=edb204f878d69450dabfd94bad2121ad88b7195abd7307c3b15924ef00ed2a37
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c2a2ef049ddbd       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   edb204f878d69       busybox                                     default
	7a603ae151667       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                      14 seconds ago      Running             coredns                   0                   e3daf65b19521       coredns-7d764666f9-p7xs9                    kube-system
	6e3928a9ee276       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      14 seconds ago      Running             storage-provisioner       0                   6865cc97436f2       storage-provisioner                         kube-system
	b73ef32e07402       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    26 seconds ago      Running             kindnet-cni               0                   8bc7dffa6fbda       kindnet-2v4p8                               kube-system
	3fc617711aa41       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                      30 seconds ago      Running             kube-proxy                0                   79dcdaa3be2ef       kube-proxy-7mx96                            kube-system
	b0f57a7ce0496       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                      42 seconds ago      Running             kube-scheduler            0                   e371e8817b1ac       kube-scheduler-no-preload-542467            kube-system
	16e2188fea027       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                      42 seconds ago      Running             kube-controller-manager   0                   614d77cc5f23f       kube-controller-manager-no-preload-542467   kube-system
	2e56d5b6f5c30       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                      42 seconds ago      Running             etcd                      0                   791e5074837ec       etcd-no-preload-542467                      kube-system
	85de6ec89e609       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                      42 seconds ago      Running             kube-apiserver            0                   d70f7a840222f       kube-apiserver-no-preload-542467            kube-system
	
	
	==> coredns [7a603ae151667b8efd1ed48159df9b0a004a556363c63f0550b776a2a924bc58] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:49979 - 20432 "HINFO IN 1220196304710342964.3647539015912196442. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016457069s
	
	
	==> describe nodes <==
	Name:               no-preload-542467
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-542467
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=no-preload-542467
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_57_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:57:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-542467
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:58:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:58:10 +0000   Sat, 27 Dec 2025 20:57:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:58:10 +0000   Sat, 27 Dec 2025 20:57:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:58:10 +0000   Sat, 27 Dec 2025 20:57:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:58:10 +0000   Sat, 27 Dec 2025 20:57:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-542467
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                965c0b17-6aea-4550-9015-e80b58ef7dfe
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-7d764666f9-p7xs9                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     31s
	  kube-system                 etcd-no-preload-542467                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         36s
	  kube-system                 kindnet-2v4p8                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-no-preload-542467             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-no-preload-542467    200m (10%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-7mx96                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-no-preload-542467             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  33s   node-controller  Node no-preload-542467 event: Registered Node no-preload-542467 in Controller
	
	
	==> dmesg <==
	[ +35.447549] overlayfs: idmapped layers are currently not supported
	[Dec27 20:26] overlayfs: idmapped layers are currently not supported
	[Dec27 20:27] overlayfs: idmapped layers are currently not supported
	[  +6.770645] overlayfs: idmapped layers are currently not supported
	[Dec27 20:28] overlayfs: idmapped layers are currently not supported
	[ +25.872751] overlayfs: idmapped layers are currently not supported
	[Dec27 20:29] overlayfs: idmapped layers are currently not supported
	[ +32.997137] overlayfs: idmapped layers are currently not supported
	[Dec27 20:31] overlayfs: idmapped layers are currently not supported
	[Dec27 20:33] overlayfs: idmapped layers are currently not supported
	[ +33.772475] overlayfs: idmapped layers are currently not supported
	[Dec27 20:39] overlayfs: idmapped layers are currently not supported
	[Dec27 20:40] overlayfs: idmapped layers are currently not supported
	[Dec27 20:44] overlayfs: idmapped layers are currently not supported
	[Dec27 20:45] overlayfs: idmapped layers are currently not supported
	[Dec27 20:49] overlayfs: idmapped layers are currently not supported
	[Dec27 20:50] overlayfs: idmapped layers are currently not supported
	[Dec27 20:51] overlayfs: idmapped layers are currently not supported
	[Dec27 20:52] overlayfs: idmapped layers are currently not supported
	[Dec27 20:53] overlayfs: idmapped layers are currently not supported
	[Dec27 20:55] overlayfs: idmapped layers are currently not supported
	[ +57.272039] overlayfs: idmapped layers are currently not supported
	[Dec27 20:57] overlayfs: idmapped layers are currently not supported
	[ +34.093681] overlayfs: idmapped layers are currently not supported
	[Dec27 20:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2e56d5b6f5c30df90b18830002293c21d1b7e4f39fef38eee4f81b0e6a49fb71] <==
	{"level":"info","ts":"2025-12-27T20:57:32.501065Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"ea7e25599daad906","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-12-27T20:57:33.170186Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T20:57:33.171028Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T20:57:33.171119Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-12-27T20:57:33.172380Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:57:33.172437Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:57:33.174267Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T20:57:33.174337Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:57:33.174379Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-27T20:57:33.174414Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T20:57:33.185606Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:57:33.197275Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-542467 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:57:33.214058Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:57:33.247204Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:57:33.260335Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:57:33.271686Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:57:33.272753Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:57:33.271376Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:57:33.276649Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:57:33.276977Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:57:33.277042Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:57:33.277071Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T20:57:33.277119Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T20:57:33.277167Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T20:57:33.281846Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 20:58:15 up  2:40,  0 user,  load average: 3.93, 2.14, 1.90
	Linux no-preload-542467 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b73ef32e074024ee4c667c7f423384e884aa425dc068f758490852fe9c7186fc] <==
	I1227 20:57:49.232338       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:57:49.232665       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 20:57:49.232864       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:57:49.232912       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:57:49.232968       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:57:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:57:49.438020       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:57:49.438114       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:57:49.438156       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:57:49.438848       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 20:57:49.638489       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:57:49.639004       1 metrics.go:72] Registering metrics
	I1227 20:57:49.639097       1 controller.go:711] "Syncing nftables rules"
	I1227 20:57:59.441574       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:57:59.441689       1 main.go:301] handling current node
	I1227 20:58:09.437526       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:58:09.437558       1 main.go:301] handling current node
	
	
	==> kube-apiserver [85de6ec89e6099ad3f46fe8c1dafa3c83301e311c3080b78f744f3ecb2000c74] <==
	E1227 20:57:36.036565       1 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E1227 20:57:36.076742       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1227 20:57:36.102237       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 20:57:36.107317       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:57:36.107719       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 20:57:36.115145       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:57:36.116048       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 20:57:36.251294       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:57:36.707744       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1227 20:57:36.715043       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1227 20:57:36.716424       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:57:37.679963       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:57:37.743892       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:57:37.900740       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:57:37.931353       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 20:57:37.952994       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1227 20:57:37.954698       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:57:37.964554       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:57:39.031320       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:57:39.053701       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 20:57:39.063656       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 20:57:43.930169       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:57:44.035212       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 20:57:44.060363       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:57:44.065322       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [16e2188fea027b4c423bc6b1f45c6b9c0535f3815dad5e6f99c96cc2b3a147aa] <==
	I1227 20:57:42.833669       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:42.833676       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:42.833648       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:42.834819       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 20:57:42.834905       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-542467"
	I1227 20:57:42.834972       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1227 20:57:42.833656       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:42.833663       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:42.833683       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:42.833688       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:42.847677       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:42.848316       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:42.848327       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:42.848335       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:42.848341       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:42.848348       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:42.848379       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:42.848390       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:42.848395       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:42.891908       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:42.913342       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:42.942817       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:42.942842       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:57:42.942847       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:58:02.838741       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [3fc617711aa417346f55152457d17a34de973835cdf32c455f077c778a237022] <==
	I1227 20:57:44.958899       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:57:45.079485       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:57:45.180542       1 shared_informer.go:377] "Caches are synced"
	I1227 20:57:45.180575       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 20:57:45.180656       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:57:45.299430       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:57:45.299485       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:57:45.315160       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:57:45.316137       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:57:45.316152       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:57:45.323234       1 config.go:200] "Starting service config controller"
	I1227 20:57:45.323253       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:57:45.323268       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:57:45.323272       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:57:45.323298       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:57:45.323302       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:57:45.323925       1 config.go:309] "Starting node config controller"
	I1227 20:57:45.323931       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:57:45.323937       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:57:45.429513       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:57:45.429560       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 20:57:45.429763       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [b0f57a7ce0496c42d26e1dcde87656fb68da921f3b8bb7c945e27d6c194f17e8] <==
	E1227 20:57:35.987477       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 20:57:35.987664       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 20:57:35.987694       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 20:57:35.987741       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 20:57:35.987914       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 20:57:35.990269       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 20:57:35.990512       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 20:57:35.990698       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 20:57:35.990769       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 20:57:35.990882       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 20:57:36.863751       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 20:57:36.863820       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 20:57:36.908245       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 20:57:36.925471       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 20:57:36.942549       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 20:57:36.977969       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 20:57:37.025739       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 20:57:37.045517       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 20:57:37.047286       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 20:57:37.189699       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 20:57:37.193677       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 20:57:37.204337       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 20:57:37.213071       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 20:57:37.261996       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	I1227 20:57:39.666171       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:57:44 no-preload-542467 kubelet[1964]: I1227 20:57:44.398982    1964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d494c52-2b6e-431e-a66c-7f1e3f28a070-lib-modules\") pod \"kube-proxy-7mx96\" (UID: \"8d494c52-2b6e-431e-a66c-7f1e3f28a070\") " pod="kube-system/kube-proxy-7mx96"
	Dec 27 20:57:44 no-preload-542467 kubelet[1964]: I1227 20:57:44.399013    1964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwgvq\" (UniqueName: \"kubernetes.io/projected/8d494c52-2b6e-431e-a66c-7f1e3f28a070-kube-api-access-hwgvq\") pod \"kube-proxy-7mx96\" (UID: \"8d494c52-2b6e-431e-a66c-7f1e3f28a070\") " pod="kube-system/kube-proxy-7mx96"
	Dec 27 20:57:44 no-preload-542467 kubelet[1964]: I1227 20:57:44.399036    1964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c2c77c3-7d5e-45f4-8eea-f6928cf134f5-xtables-lock\") pod \"kindnet-2v4p8\" (UID: \"9c2c77c3-7d5e-45f4-8eea-f6928cf134f5\") " pod="kube-system/kindnet-2v4p8"
	Dec 27 20:57:44 no-preload-542467 kubelet[1964]: I1227 20:57:44.399056    1964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj5vr\" (UniqueName: \"kubernetes.io/projected/9c2c77c3-7d5e-45f4-8eea-f6928cf134f5-kube-api-access-gj5vr\") pod \"kindnet-2v4p8\" (UID: \"9c2c77c3-7d5e-45f4-8eea-f6928cf134f5\") " pod="kube-system/kindnet-2v4p8"
	Dec 27 20:57:44 no-preload-542467 kubelet[1964]: I1227 20:57:44.399090    1964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8d494c52-2b6e-431e-a66c-7f1e3f28a070-kube-proxy\") pod \"kube-proxy-7mx96\" (UID: \"8d494c52-2b6e-431e-a66c-7f1e3f28a070\") " pod="kube-system/kube-proxy-7mx96"
	Dec 27 20:57:44 no-preload-542467 kubelet[1964]: I1227 20:57:44.525315    1964 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 27 20:57:49 no-preload-542467 kubelet[1964]: I1227 20:57:49.423485    1964 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-7mx96" podStartSLOduration=5.423469082 podStartE2EDuration="5.423469082s" podCreationTimestamp="2025-12-27 20:57:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:57:45.391649873 +0000 UTC m=+6.473599298" watchObservedRunningTime="2025-12-27 20:57:49.423469082 +0000 UTC m=+10.505418482"
	Dec 27 20:57:52 no-preload-542467 kubelet[1964]: E1227 20:57:52.990927    1964 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-542467" containerName="kube-apiserver"
	Dec 27 20:57:53 no-preload-542467 kubelet[1964]: I1227 20:57:53.010865    1964 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-2v4p8" podStartSLOduration=4.675828245 podStartE2EDuration="9.010847281s" podCreationTimestamp="2025-12-27 20:57:44 +0000 UTC" firstStartedPulling="2025-12-27 20:57:44.636673486 +0000 UTC m=+5.718622885" lastFinishedPulling="2025-12-27 20:57:48.971692521 +0000 UTC m=+10.053641921" observedRunningTime="2025-12-27 20:57:49.425309713 +0000 UTC m=+10.507259112" watchObservedRunningTime="2025-12-27 20:57:53.010847281 +0000 UTC m=+14.092796705"
	Dec 27 20:57:53 no-preload-542467 kubelet[1964]: E1227 20:57:53.678933    1964 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-542467" containerName="kube-scheduler"
	Dec 27 20:57:53 no-preload-542467 kubelet[1964]: E1227 20:57:53.722303    1964 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-542467" containerName="etcd"
	Dec 27 20:57:54 no-preload-542467 kubelet[1964]: E1227 20:57:54.226306    1964 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-542467" containerName="kube-controller-manager"
	Dec 27 20:57:59 no-preload-542467 kubelet[1964]: I1227 20:57:59.986600    1964 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 27 20:58:00 no-preload-542467 kubelet[1964]: I1227 20:58:00.178175    1964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5728b9d-d5dd-4946-971a-543ccae4bbb5-config-volume\") pod \"coredns-7d764666f9-p7xs9\" (UID: \"b5728b9d-d5dd-4946-971a-543ccae4bbb5\") " pod="kube-system/coredns-7d764666f9-p7xs9"
	Dec 27 20:58:00 no-preload-542467 kubelet[1964]: I1227 20:58:00.178452    1964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/20b095bb-fb60-4860-ae08-c05d950bd9ea-tmp\") pod \"storage-provisioner\" (UID: \"20b095bb-fb60-4860-ae08-c05d950bd9ea\") " pod="kube-system/storage-provisioner"
	Dec 27 20:58:00 no-preload-542467 kubelet[1964]: I1227 20:58:00.178609    1964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q5p5\" (UniqueName: \"kubernetes.io/projected/20b095bb-fb60-4860-ae08-c05d950bd9ea-kube-api-access-2q5p5\") pod \"storage-provisioner\" (UID: \"20b095bb-fb60-4860-ae08-c05d950bd9ea\") " pod="kube-system/storage-provisioner"
	Dec 27 20:58:00 no-preload-542467 kubelet[1964]: I1227 20:58:00.178750    1964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ldbn\" (UniqueName: \"kubernetes.io/projected/b5728b9d-d5dd-4946-971a-543ccae4bbb5-kube-api-access-4ldbn\") pod \"coredns-7d764666f9-p7xs9\" (UID: \"b5728b9d-d5dd-4946-971a-543ccae4bbb5\") " pod="kube-system/coredns-7d764666f9-p7xs9"
	Dec 27 20:58:01 no-preload-542467 kubelet[1964]: E1227 20:58:01.448600    1964 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-p7xs9" containerName="coredns"
	Dec 27 20:58:01 no-preload-542467 kubelet[1964]: I1227 20:58:01.486404    1964 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.486386677 podStartE2EDuration="15.486386677s" podCreationTimestamp="2025-12-27 20:57:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:58:01.465419864 +0000 UTC m=+22.547369264" watchObservedRunningTime="2025-12-27 20:58:01.486386677 +0000 UTC m=+22.568336093"
	Dec 27 20:58:02 no-preload-542467 kubelet[1964]: E1227 20:58:02.450953    1964 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-p7xs9" containerName="coredns"
	Dec 27 20:58:03 no-preload-542467 kubelet[1964]: E1227 20:58:03.453255    1964 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-p7xs9" containerName="coredns"
	Dec 27 20:58:03 no-preload-542467 kubelet[1964]: I1227 20:58:03.879032    1964 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-p7xs9" podStartSLOduration=19.879015066 podStartE2EDuration="19.879015066s" podCreationTimestamp="2025-12-27 20:57:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 20:58:01.48817583 +0000 UTC m=+22.570125230" watchObservedRunningTime="2025-12-27 20:58:03.879015066 +0000 UTC m=+24.960964466"
	Dec 27 20:58:04 no-preload-542467 kubelet[1964]: I1227 20:58:04.022827    1964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6nm2\" (UniqueName: \"kubernetes.io/projected/839817a5-386c-47cf-acc7-77e328ee53be-kube-api-access-j6nm2\") pod \"busybox\" (UID: \"839817a5-386c-47cf-acc7-77e328ee53be\") " pod="default/busybox"
	Dec 27 20:58:04 no-preload-542467 kubelet[1964]: W1227 20:58:04.206432    1964 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/dd7872488d6d42d5f37285938726aa6ef58b390c3cf12a82967c0d0945a69379/crio-edb204f878d69450dabfd94bad2121ad88b7195abd7307c3b15924ef00ed2a37 WatchSource:0}: Error finding container edb204f878d69450dabfd94bad2121ad88b7195abd7307c3b15924ef00ed2a37: Status 404 returned error can't find the container with id edb204f878d69450dabfd94bad2121ad88b7195abd7307c3b15924ef00ed2a37
	Dec 27 20:58:06 no-preload-542467 kubelet[1964]: I1227 20:58:06.483616    1964 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.481667993 podStartE2EDuration="3.483591457s" podCreationTimestamp="2025-12-27 20:58:03 +0000 UTC" firstStartedPulling="2025-12-27 20:58:04.210431649 +0000 UTC m=+25.292381049" lastFinishedPulling="2025-12-27 20:58:06.212355014 +0000 UTC m=+27.294304513" observedRunningTime="2025-12-27 20:58:06.482798683 +0000 UTC m=+27.564748099" watchObservedRunningTime="2025-12-27 20:58:06.483591457 +0000 UTC m=+27.565540857"
	
	
	==> storage-provisioner [6e3928a9ee27604160bbc8642493608c1f68af8991000523deeecce2510333cb] <==
	I1227 20:58:00.564913       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 20:58:00.591112       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 20:58:00.591255       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 20:58:00.594356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:58:00.603205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:58:00.603435       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 20:58:00.603622       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-542467_a4c09e85-a49e-4bd3-a420-21566ef16d18!
	I1227 20:58:00.604565       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cc82333b-f666-454a-923f-92228b1762ed", APIVersion:"v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-542467_a4c09e85-a49e-4bd3-a420-21566ef16d18 became leader
	W1227 20:58:00.623584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:58:00.649038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:58:00.704740       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-542467_a4c09e85-a49e-4bd3-a420-21566ef16d18!
	W1227 20:58:02.652254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:58:02.657815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:58:04.660345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:58:04.666593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:58:06.673786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:58:06.684160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:58:08.686903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:58:08.690975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:58:10.694686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:58:10.701967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:58:12.705160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:58:12.715487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:58:14.718924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:58:14.723823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-542467 -n no-preload-542467
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-542467 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.60s)
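Note on the repeated warnings in the storage-provisioner log above: the provisioner takes its leader-election lock on the kube-system/k8s.io-minikube-hostpath Endpoints object (see the LeaderElection event in its log), and the recurring "v1 Endpoints is deprecated in v1.33+" messages line up with its lease renewals, so the repetition is expected rather than a failure by itself. A quick way to look at the lock object directly; this is purely illustrative and not part of the test harness:

	kubectl --context no-preload-542467 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml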

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (5.74s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-549946 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-549946 --alsologtostderr -v=1: exit status 80 (1.912519653s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-549946 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:58:20.062277  514124 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:58:20.062431  514124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:58:20.062452  514124 out.go:374] Setting ErrFile to fd 2...
	I1227 20:58:20.062472  514124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:58:20.062752  514124 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:58:20.063031  514124 out.go:368] Setting JSON to false
	I1227 20:58:20.063074  514124 mustload.go:66] Loading cluster: newest-cni-549946
	I1227 20:58:20.063500  514124 config.go:182] Loaded profile config "newest-cni-549946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:58:20.063997  514124 cli_runner.go:164] Run: docker container inspect newest-cni-549946 --format={{.State.Status}}
	I1227 20:58:20.080934  514124 host.go:66] Checking if "newest-cni-549946" exists ...
	I1227 20:58:20.081304  514124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:58:20.151175  514124 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:65 SystemTime:2025-12-27 20:58:20.142209463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:58:20.151819  514124 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22332/minikube-v1.37.0-1766811082-22332-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766811082-22332/minikube-v1.37.0-1766811082-22332-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766811082-22332-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:newest-cni-549946 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool
=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 20:58:20.157269  514124 out.go:179] * Pausing node newest-cni-549946 ... 
	I1227 20:58:20.160123  514124 host.go:66] Checking if "newest-cni-549946" exists ...
	I1227 20:58:20.160476  514124 ssh_runner.go:195] Run: systemctl --version
	I1227 20:58:20.160538  514124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:20.178231  514124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:58:20.275900  514124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:58:20.288600  514124 pause.go:52] kubelet running: true
	I1227 20:58:20.288681  514124 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:58:20.504760  514124 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:58:20.504872  514124 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:58:20.578315  514124 cri.go:96] found id: "24c96d8311b2eede51392a6988d59a67f1e627441610efe3a19a1e6caed81c77"
	I1227 20:58:20.578338  514124 cri.go:96] found id: "5355cd83edb132719f8b40ff6136f563ecb8fefdebeae2c0612a70e25533a2ff"
	I1227 20:58:20.578344  514124 cri.go:96] found id: "c0bd9fdc2ab5940221455fa985fb0081e3b368f0836df24c03263c5d6dcce82f"
	I1227 20:58:20.578347  514124 cri.go:96] found id: "7fdca341cdd33eaa6d5b453c79b202a239ceef8d792a7c82a51f302277a8b8ce"
	I1227 20:58:20.578350  514124 cri.go:96] found id: "0f6243dabab6f6acf95adfec716a9b5499ae2e36626a7d7c095f59e8c54137e7"
	I1227 20:58:20.578354  514124 cri.go:96] found id: "d216918be758a4a0fd5e7b8edf4f0ee84f75b9bdcd96a5fd40cbde5de645de91"
	I1227 20:58:20.578357  514124 cri.go:96] found id: ""
	I1227 20:58:20.578406  514124 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:58:20.589432  514124 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:58:20Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:58:20.877043  514124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:58:20.889568  514124 pause.go:52] kubelet running: false
	I1227 20:58:20.889631  514124 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:58:21.045506  514124 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:58:21.045597  514124 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:58:21.144034  514124 cri.go:96] found id: "24c96d8311b2eede51392a6988d59a67f1e627441610efe3a19a1e6caed81c77"
	I1227 20:58:21.144103  514124 cri.go:96] found id: "5355cd83edb132719f8b40ff6136f563ecb8fefdebeae2c0612a70e25533a2ff"
	I1227 20:58:21.144124  514124 cri.go:96] found id: "c0bd9fdc2ab5940221455fa985fb0081e3b368f0836df24c03263c5d6dcce82f"
	I1227 20:58:21.144147  514124 cri.go:96] found id: "7fdca341cdd33eaa6d5b453c79b202a239ceef8d792a7c82a51f302277a8b8ce"
	I1227 20:58:21.144181  514124 cri.go:96] found id: "0f6243dabab6f6acf95adfec716a9b5499ae2e36626a7d7c095f59e8c54137e7"
	I1227 20:58:21.144209  514124 cri.go:96] found id: "d216918be758a4a0fd5e7b8edf4f0ee84f75b9bdcd96a5fd40cbde5de645de91"
	I1227 20:58:21.144231  514124 cri.go:96] found id: ""
	I1227 20:58:21.144326  514124 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:58:21.610673  514124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:58:21.624351  514124 pause.go:52] kubelet running: false
	I1227 20:58:21.624411  514124 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:58:21.810323  514124 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:58:21.810440  514124 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:58:21.882961  514124 cri.go:96] found id: "24c96d8311b2eede51392a6988d59a67f1e627441610efe3a19a1e6caed81c77"
	I1227 20:58:21.882989  514124 cri.go:96] found id: "5355cd83edb132719f8b40ff6136f563ecb8fefdebeae2c0612a70e25533a2ff"
	I1227 20:58:21.882999  514124 cri.go:96] found id: "c0bd9fdc2ab5940221455fa985fb0081e3b368f0836df24c03263c5d6dcce82f"
	I1227 20:58:21.883004  514124 cri.go:96] found id: "7fdca341cdd33eaa6d5b453c79b202a239ceef8d792a7c82a51f302277a8b8ce"
	I1227 20:58:21.883007  514124 cri.go:96] found id: "0f6243dabab6f6acf95adfec716a9b5499ae2e36626a7d7c095f59e8c54137e7"
	I1227 20:58:21.883011  514124 cri.go:96] found id: "d216918be758a4a0fd5e7b8edf4f0ee84f75b9bdcd96a5fd40cbde5de645de91"
	I1227 20:58:21.883015  514124 cri.go:96] found id: ""
	I1227 20:58:21.883066  514124 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:58:21.904652  514124 out.go:203] 
	W1227 20:58:21.908570  514124 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:58:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:58:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 20:58:21.908652  514124 out.go:285] * 
	* 
	W1227 20:58:21.912379  514124 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 20:58:21.915581  514124 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-549946 --alsologtostderr -v=1 failed: exit status 80
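The pause failure above reduces to the "sudo runc list -f json" step: the crictl listing finds the kube-system containers (the cri.go "found id" lines), but the runc state directory /run/runc named in the error is missing on the node, so the list call exits with status 1 and pause aborts. A minimal manual reproduction sketch, assuming the newest-cni-549946 profile is still running; the commands simply mirror the ssh_runner calls in the stderr above and are not part of the harness:

	minikube -p newest-cni-549946 ssh -- sudo runc list -f json   # expected to fail with "open /run/runc: no such file or directory"
	minikube -p newest-cni-549946 ssh -- sudo crictl ps -a        # CRI-O itself still reports the containers the pause code found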
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-549946
helpers_test.go:244: (dbg) docker inspect newest-cni-549946:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "33026e33441a3f96ec992d0bc78455daa35943f22f16bf93834cd28639575522",
	        "Created": "2025-12-27T20:57:32.376707101Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 511818,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:58:05.887615612Z",
	            "FinishedAt": "2025-12-27T20:58:05.047737794Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/33026e33441a3f96ec992d0bc78455daa35943f22f16bf93834cd28639575522/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/33026e33441a3f96ec992d0bc78455daa35943f22f16bf93834cd28639575522/hostname",
	        "HostsPath": "/var/lib/docker/containers/33026e33441a3f96ec992d0bc78455daa35943f22f16bf93834cd28639575522/hosts",
	        "LogPath": "/var/lib/docker/containers/33026e33441a3f96ec992d0bc78455daa35943f22f16bf93834cd28639575522/33026e33441a3f96ec992d0bc78455daa35943f22f16bf93834cd28639575522-json.log",
	        "Name": "/newest-cni-549946",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-549946:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-549946",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "33026e33441a3f96ec992d0bc78455daa35943f22f16bf93834cd28639575522",
	                "LowerDir": "/var/lib/docker/overlay2/982c2034b4244173858000f623c93c04ec27cc043c3dc430bac371ee9def442a-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/982c2034b4244173858000f623c93c04ec27cc043c3dc430bac371ee9def442a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/982c2034b4244173858000f623c93c04ec27cc043c3dc430bac371ee9def442a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/982c2034b4244173858000f623c93c04ec27cc043c3dc430bac371ee9def442a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-549946",
	                "Source": "/var/lib/docker/volumes/newest-cni-549946/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-549946",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-549946",
	                "name.minikube.sigs.k8s.io": "newest-cni-549946",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "16e7fecb9f4ebfa5bd50ffc99b03be2957b0cef7bd306da8442ca26a2af9c38d",
	            "SandboxKey": "/var/run/docker/netns/16e7fecb9f4e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-549946": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:8c:90:fb:9d:b6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b57edbc724b90e99751e5881513e82042c8927605a9433275af7712a02f70992",
	                    "EndpointID": "5331bb28f7c335eb435365cdfa4acfd04b837e4601947e5c3926192a7c140e2f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-549946",
	                        "33026e33441a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
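The inspect output shows the kic container itself was never paused at the Docker layer ("Status": "running", "Paused": false), which is consistent with the pause path failing inside the node at the runc listing step rather than at the container level. A one-line check against the same fields, illustrative only:

	docker container inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-549946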
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-549946 -n newest-cni-549946
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-549946 -n newest-cni-549946: exit status 2 (364.096615ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-549946 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p default-k8s-diff-port-058924 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-058924                                                                                                                                                                                                               │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
	│ delete  │ -p default-k8s-diff-port-058924                                                                                                                                                                                                               │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
	│ start   │ -p embed-certs-193865 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:55 UTC │
	│ addons  │ enable metrics-server -p embed-certs-193865 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │                     │
	│ stop    │ -p embed-certs-193865 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │ 27 Dec 25 20:55 UTC │
	│ addons  │ enable dashboard -p embed-certs-193865 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │ 27 Dec 25 20:55 UTC │
	│ start   │ -p embed-certs-193865 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │ 27 Dec 25 20:56 UTC │
	│ image   │ embed-certs-193865 image list --format=json                                                                                                                                                                                                   │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:56 UTC │ 27 Dec 25 20:56 UTC │
	│ pause   │ -p embed-certs-193865 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:56 UTC │                     │
	│ delete  │ -p embed-certs-193865                                                                                                                                                                                                                         │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ delete  │ -p embed-certs-193865                                                                                                                                                                                                                         │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ delete  │ -p disable-driver-mounts-371621                                                                                                                                                                                                               │ disable-driver-mounts-371621 │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ start   │ -p no-preload-542467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-542467            │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:58 UTC │
	│ ssh     │ force-systemd-flag-604544 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-604544    │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ delete  │ -p force-systemd-flag-604544                                                                                                                                                                                                                  │ force-systemd-flag-604544    │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ start   │ -p newest-cni-549946 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-549946            │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:58 UTC │
	│ addons  │ enable metrics-server -p newest-cni-549946 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-549946            │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │                     │
	│ stop    │ -p newest-cni-549946 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-549946            │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ addons  │ enable dashboard -p newest-cni-549946 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-549946            │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ start   │ -p newest-cni-549946 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-549946            │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ addons  │ enable metrics-server -p no-preload-542467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-542467            │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │                     │
	│ stop    │ -p no-preload-542467 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-542467            │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │                     │
	│ image   │ newest-cni-549946 image list --format=json                                                                                                                                                                                                    │ newest-cni-549946            │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ pause   │ -p newest-cni-549946 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-549946            │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:58:05
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:58:05.579669  511686 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:58:05.579898  511686 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:58:05.579928  511686 out.go:374] Setting ErrFile to fd 2...
	I1227 20:58:05.579952  511686 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:58:05.580412  511686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:58:05.581121  511686 out.go:368] Setting JSON to false
	I1227 20:58:05.582445  511686 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9638,"bootTime":1766859448,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:58:05.582520  511686 start.go:143] virtualization:  
	I1227 20:58:05.585842  511686 out.go:179] * [newest-cni-549946] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:58:05.589957  511686 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:58:05.590029  511686 notify.go:221] Checking for updates...
	I1227 20:58:05.596871  511686 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:58:05.599873  511686 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:58:05.602734  511686 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:58:05.605699  511686 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:58:05.608504  511686 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:58:05.611820  511686 config.go:182] Loaded profile config "newest-cni-549946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:58:05.612429  511686 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:58:05.639562  511686 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:58:05.639836  511686 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:58:05.710299  511686 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:58:05.701183266 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:58:05.710404  511686 docker.go:319] overlay module found
	I1227 20:58:05.713567  511686 out.go:179] * Using the docker driver based on existing profile
	I1227 20:58:05.716553  511686 start.go:309] selected driver: docker
	I1227 20:58:05.716573  511686 start.go:928] validating driver "docker" against &{Name:newest-cni-549946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-549946 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:58:05.716701  511686 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:58:05.717439  511686 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:58:05.789602  511686 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:58:05.778418689 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:58:05.789943  511686 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 20:58:05.789967  511686 cni.go:84] Creating CNI manager for ""
	I1227 20:58:05.790020  511686 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:58:05.790051  511686 start.go:353] cluster config:
	{Name:newest-cni-549946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-549946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:58:05.795273  511686 out.go:179] * Starting "newest-cni-549946" primary control-plane node in "newest-cni-549946" cluster
	I1227 20:58:05.798135  511686 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:58:05.801013  511686 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:58:05.803853  511686 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:58:05.803899  511686 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:58:05.803909  511686 cache.go:65] Caching tarball of preloaded images
	I1227 20:58:05.803989  511686 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:58:05.804005  511686 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:58:05.804125  511686 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/config.json ...
	I1227 20:58:05.804333  511686 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:58:05.826712  511686 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:58:05.826731  511686 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:58:05.826752  511686 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:58:05.826783  511686 start.go:360] acquireMachinesLock for newest-cni-549946: {Name:mk8b0ea7d2aaecab8531b3a335f669f52685ec48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:58:05.826839  511686 start.go:364] duration metric: took 34.124µs to acquireMachinesLock for "newest-cni-549946"
	I1227 20:58:05.826864  511686 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:58:05.826869  511686 fix.go:54] fixHost starting: 
	I1227 20:58:05.827164  511686 cli_runner.go:164] Run: docker container inspect newest-cni-549946 --format={{.State.Status}}
	I1227 20:58:05.848340  511686 fix.go:112] recreateIfNeeded on newest-cni-549946: state=Stopped err=<nil>
	W1227 20:58:05.848368  511686 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:58:05.851717  511686 out.go:252] * Restarting existing docker container for "newest-cni-549946" ...
	I1227 20:58:05.851823  511686 cli_runner.go:164] Run: docker start newest-cni-549946
	I1227 20:58:06.186737  511686 cli_runner.go:164] Run: docker container inspect newest-cni-549946 --format={{.State.Status}}
	I1227 20:58:06.221990  511686 kic.go:430] container "newest-cni-549946" state is running.
	I1227 20:58:06.222364  511686 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-549946
	I1227 20:58:06.255265  511686 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/config.json ...
	I1227 20:58:06.255490  511686 machine.go:94] provisionDockerMachine start ...
	I1227 20:58:06.255558  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:06.284882  511686 main.go:144] libmachine: Using SSH client type: native
	I1227 20:58:06.285200  511686 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1227 20:58:06.285208  511686 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:58:06.285790  511686 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56808->127.0.0.1:33448: read: connection reset by peer
	I1227 20:58:09.429036  511686 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-549946
	
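The handshake failure above ("connection reset by peer") right after `docker start` is normal: the container is running, but sshd inside it has not finished starting, so libmachine simply retries until the dial succeeds about three seconds later. A minimal Go sketch of that retry idea (hypothetical helper, not minikube's code; 127.0.0.1:33448 is the forwarded SSH port from this run):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls the forwarded SSH port until the container's sshd
// accepts a TCP connection or the timeout expires. Illustrative only.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond) // sshd may still be coming up
	}
	return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
}

func main() {
	if err := waitForSSH("127.0.0.1:33448", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}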
	I1227 20:58:09.429104  511686 ubuntu.go:182] provisioning hostname "newest-cni-549946"
	I1227 20:58:09.429174  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:09.448270  511686 main.go:144] libmachine: Using SSH client type: native
	I1227 20:58:09.448616  511686 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1227 20:58:09.448627  511686 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-549946 && echo "newest-cni-549946" | sudo tee /etc/hostname
	I1227 20:58:09.599731  511686 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-549946
	
	I1227 20:58:09.599830  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:09.617017  511686 main.go:144] libmachine: Using SSH client type: native
	I1227 20:58:09.617346  511686 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1227 20:58:09.617365  511686 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-549946' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-549946/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-549946' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:58:09.753625  511686 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:58:09.753650  511686 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:58:09.753675  511686 ubuntu.go:190] setting up certificates
	I1227 20:58:09.753686  511686 provision.go:84] configureAuth start
	I1227 20:58:09.753743  511686 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-549946
	I1227 20:58:09.771756  511686 provision.go:143] copyHostCerts
	I1227 20:58:09.771827  511686 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:58:09.771846  511686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:58:09.771920  511686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:58:09.772025  511686 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:58:09.772034  511686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:58:09.772062  511686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:58:09.772123  511686 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:58:09.772135  511686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:58:09.772162  511686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:58:09.772213  511686 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.newest-cni-549946 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-549946]
	I1227 20:58:09.888037  511686 provision.go:177] copyRemoteCerts
	I1227 20:58:09.888101  511686 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:58:09.888139  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:09.906220  511686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:58:10.004983  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:58:10.031391  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 20:58:10.051651  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:58:10.072539  511686 provision.go:87] duration metric: took 318.829454ms to configureAuth
	I1227 20:58:10.072567  511686 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:58:10.072782  511686 config.go:182] Loaded profile config "newest-cni-549946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:58:10.072891  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:10.090743  511686 main.go:144] libmachine: Using SSH client type: native
	I1227 20:58:10.091067  511686 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1227 20:58:10.091085  511686 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:58:10.425631  511686 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:58:10.425654  511686 machine.go:97] duration metric: took 4.170154292s to provisionDockerMachine
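The SSH command above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube so that CRI-O treats the in-cluster service CIDR as an insecure registry, then restarts the runtime. A rough sketch of how such a remote command string could be assembled (hypothetical helper, not minikube's actual code path):

package main

import "fmt"

// crioMinikubeOptsCmd builds a remote shell command like the one in the log:
// write CRIO_MINIKUBE_OPTIONS to /etc/sysconfig/crio.minikube, then restart
// CRI-O so the --insecure-registry flag takes effect. Sketch only.
func crioMinikubeOptsCmd(serviceCIDR string) string {
	payload := "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry " + serviceCIDR + " '\n"
	return `sudo mkdir -p /etc/sysconfig && printf %s "` + payload +
		`" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
}

func main() {
	fmt.Println(crioMinikubeOptsCmd("10.96.0.0/12"))
}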
	I1227 20:58:10.425666  511686 start.go:293] postStartSetup for "newest-cni-549946" (driver="docker")
	I1227 20:58:10.425677  511686 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:58:10.425751  511686 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:58:10.425804  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:10.443588  511686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:58:10.545357  511686 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:58:10.548638  511686 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:58:10.548665  511686 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:58:10.548677  511686 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:58:10.548732  511686 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:58:10.548824  511686 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:58:10.548929  511686 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:58:10.556727  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:58:10.574205  511686 start.go:296] duration metric: took 148.523271ms for postStartSetup
	I1227 20:58:10.574304  511686 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:58:10.574346  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:10.591055  511686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:58:10.686497  511686 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:58:10.691401  511686 fix.go:56] duration metric: took 4.864525019s for fixHost
	I1227 20:58:10.691428  511686 start.go:83] releasing machines lock for "newest-cni-549946", held for 4.864579443s
	I1227 20:58:10.691506  511686 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-549946
	I1227 20:58:10.709438  511686 ssh_runner.go:195] Run: cat /version.json
	I1227 20:58:10.709546  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:10.709651  511686 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:58:10.709728  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:10.737635  511686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:58:10.751289  511686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:58:10.938928  511686 ssh_runner.go:195] Run: systemctl --version
	I1227 20:58:10.945537  511686 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:58:10.981187  511686 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:58:10.986019  511686 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:58:10.986087  511686 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:58:10.993646  511686 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
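The find/mv step above renames any bridge or podman CNI configuration in /etc/cni/net.d to *.mk_disabled so that only the CNI minikube installs (kindnet, per the recommendation earlier in the log) is active; here nothing matched, so there was nothing to disable. The same idea, sketched in Go against a local directory (illustrative only, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs renames bridge/podman CNI configs so CRI-O ignores them,
// mirroring the `find ... -exec mv {} {}.mk_disabled` step in the log.
func disableBridgeCNIs(dir string) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", src)
		}
	}
	return nil
}

func main() {
	_ = disableBridgeCNIs("/etc/cni/net.d")
}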
	I1227 20:58:10.993670  511686 start.go:496] detecting cgroup driver to use...
	I1227 20:58:10.993727  511686 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:58:10.993792  511686 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:58:11.009752  511686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:58:11.024113  511686 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:58:11.024178  511686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:58:11.040143  511686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:58:11.053938  511686 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:58:11.175025  511686 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:58:11.301907  511686 docker.go:234] disabling docker service ...
	I1227 20:58:11.301979  511686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:58:11.316307  511686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:58:11.328500  511686 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:58:11.451976  511686 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:58:11.572834  511686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:58:11.587070  511686 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:58:11.600577  511686 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:58:11.600670  511686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:11.609586  511686 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:58:11.609685  511686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:11.618475  511686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:11.627541  511686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:11.635766  511686 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:58:11.643217  511686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:11.651666  511686 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:11.661218  511686 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:11.670033  511686 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:58:11.677781  511686 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:58:11.686320  511686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:58:11.806390  511686 ssh_runner.go:195] Run: sudo systemctl restart crio
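The series of sed one-liners above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image to registry.k8s.io/pause:3.10.1, switches the cgroup manager to cgroupfs to match the host, moves conmon into the pod cgroup, and opens unprivileged low ports via default_sysctls, before reloading systemd and restarting CRI-O. Two of those substitutions, expressed as Go regexps over an in-memory config string (a sketch of the same rewrites, not how minikube applies them):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the pause_image and cgroup_manager substitutions
// shown in the log to the contents of 02-crio.conf.
func rewriteCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in))
}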
	I1227 20:58:11.997876  511686 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:58:11.998033  511686 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:58:12.002126  511686 start.go:574] Will wait 60s for crictl version
	I1227 20:58:12.002242  511686 ssh_runner.go:195] Run: which crictl
	I1227 20:58:12.005830  511686 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:58:12.034649  511686 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:58:12.034748  511686 ssh_runner.go:195] Run: crio --version
	I1227 20:58:12.067553  511686 ssh_runner.go:195] Run: crio --version
	I1227 20:58:12.101209  511686 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:58:12.104143  511686 cli_runner.go:164] Run: docker network inspect newest-cni-549946 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:58:12.120402  511686 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 20:58:12.124331  511686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:58:12.136753  511686 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1227 20:58:12.139416  511686 kubeadm.go:884] updating cluster {Name:newest-cni-549946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-549946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:58:12.139565  511686 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:58:12.139643  511686 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:58:12.172957  511686 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:58:12.172983  511686 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:58:12.173035  511686 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:58:12.199879  511686 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:58:12.199902  511686 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:58:12.199911  511686 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1227 20:58:12.200001  511686 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-549946 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-549946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:58:12.200086  511686 ssh_runner.go:195] Run: crio config
	I1227 20:58:12.253231  511686 cni.go:84] Creating CNI manager for ""
	I1227 20:58:12.253254  511686 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:58:12.253275  511686 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1227 20:58:12.253330  511686 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-549946 NodeName:newest-cni-549946 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:58:12.253507  511686 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-549946"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:58:12.253667  511686 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:58:12.261200  511686 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:58:12.261287  511686 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:58:12.268254  511686 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 20:58:12.280255  511686 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:58:12.292764  511686 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1227 20:58:12.305027  511686 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:58:12.308536  511686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
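Both /etc/hosts edits in this start (host.minikube.internal earlier, control-plane.minikube.internal here) follow the same idempotent shell pattern: drop any existing line for the name, append a fresh "IP<TAB>name" entry to a temp file, and copy it back over /etc/hosts. A small Go sketch of that upsert, operating on the file contents as a string (illustrative only):

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry removes any line ending in "\t<name>" and appends the
// fresh mapping, matching the grep -v / echo pattern in the log.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	in := "127.0.0.1\tlocalhost\n192.168.85.1\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(in, "192.168.85.2", "control-plane.minikube.internal"))
}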
	I1227 20:58:12.318005  511686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:58:12.426126  511686 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:58:12.441235  511686 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946 for IP: 192.168.85.2
	I1227 20:58:12.441257  511686 certs.go:195] generating shared ca certs ...
	I1227 20:58:12.441274  511686 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:12.441415  511686 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:58:12.441493  511686 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:58:12.441507  511686 certs.go:257] generating profile certs ...
	I1227 20:58:12.441591  511686 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/client.key
	I1227 20:58:12.441668  511686 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/apiserver.key.a445ad92
	I1227 20:58:12.441724  511686 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/proxy-client.key
	I1227 20:58:12.441843  511686 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:58:12.441878  511686 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:58:12.441891  511686 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:58:12.441924  511686 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:58:12.441950  511686 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:58:12.441978  511686 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:58:12.442040  511686 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:58:12.442610  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:58:12.475559  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:58:12.495239  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:58:12.514584  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:58:12.532594  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 20:58:12.549293  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 20:58:12.567471  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:58:12.599194  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 20:58:12.622823  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:58:12.647147  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:58:12.667751  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:58:12.686297  511686 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:58:12.701340  511686 ssh_runner.go:195] Run: openssl version
	I1227 20:58:12.709127  511686 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:58:12.717183  511686 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:58:12.724801  511686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:58:12.734947  511686 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:58:12.735009  511686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:58:12.780174  511686 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:58:12.792965  511686 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:58:12.800693  511686 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:58:12.808613  511686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:58:12.812373  511686 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:58:12.812477  511686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:58:12.853581  511686 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:58:12.860900  511686 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:58:12.868730  511686 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:58:12.876059  511686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:58:12.879872  511686 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:58:12.879951  511686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:58:12.929410  511686 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:58:12.936808  511686 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:58:12.940746  511686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:58:12.995512  511686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:58:13.066599  511686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:58:13.113964  511686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:58:13.221548  511686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:58:13.296541  511686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
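Each `openssl x509 -checkend 86400` run above asks whether the given certificate stays valid for at least another 24 hours; a zero exit status lets minikube skip regenerating it. The equivalent check with Go's standard library (a sketch, assuming a PEM-encoded certificate on disk):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certValidFor reports whether the certificate at path remains valid for at
// least the given duration, mirroring `openssl x509 -checkend`.
func certValidFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}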
	I1227 20:58:13.512869  511686 kubeadm.go:401] StartCluster: {Name:newest-cni-549946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-549946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:58:13.512958  511686 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:58:13.513022  511686 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:58:13.656475  511686 cri.go:96] found id: "c0bd9fdc2ab5940221455fa985fb0081e3b368f0836df24c03263c5d6dcce82f"
	I1227 20:58:13.656502  511686 cri.go:96] found id: "7fdca341cdd33eaa6d5b453c79b202a239ceef8d792a7c82a51f302277a8b8ce"
	I1227 20:58:13.656508  511686 cri.go:96] found id: "0f6243dabab6f6acf95adfec716a9b5499ae2e36626a7d7c095f59e8c54137e7"
	I1227 20:58:13.656512  511686 cri.go:96] found id: "d216918be758a4a0fd5e7b8edf4f0ee84f75b9bdcd96a5fd40cbde5de645de91"
	I1227 20:58:13.656516  511686 cri.go:96] found id: ""
	I1227 20:58:13.656567  511686 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:58:13.710865  511686 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:58:13Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:58:13.710953  511686 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:58:13.724408  511686 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:58:13.724429  511686 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:58:13.724489  511686 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:58:13.732931  511686 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:58:13.733467  511686 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-549946" does not appear in /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:58:13.733706  511686 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-272475/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-549946" cluster setting kubeconfig missing "newest-cni-549946" context setting]
	I1227 20:58:13.734122  511686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:13.735809  511686 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:58:13.747032  511686 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1227 20:58:13.747066  511686 kubeadm.go:602] duration metric: took 22.631375ms to restartPrimaryControlPlane
	I1227 20:58:13.747076  511686 kubeadm.go:403] duration metric: took 234.218644ms to StartCluster
	I1227 20:58:13.747096  511686 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:13.747153  511686 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:58:13.747986  511686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:13.748195  511686 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:58:13.748574  511686 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:58:13.748653  511686 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-549946"
	I1227 20:58:13.748669  511686 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-549946"
	W1227 20:58:13.748679  511686 addons.go:248] addon storage-provisioner should already be in state true
	I1227 20:58:13.748702  511686 host.go:66] Checking if "newest-cni-549946" exists ...
	I1227 20:58:13.749246  511686 cli_runner.go:164] Run: docker container inspect newest-cni-549946 --format={{.State.Status}}
	I1227 20:58:13.749900  511686 addons.go:70] Setting dashboard=true in profile "newest-cni-549946"
	I1227 20:58:13.749916  511686 addons.go:239] Setting addon dashboard=true in "newest-cni-549946"
	W1227 20:58:13.749923  511686 addons.go:248] addon dashboard should already be in state true
	I1227 20:58:13.749948  511686 host.go:66] Checking if "newest-cni-549946" exists ...
	I1227 20:58:13.750354  511686 cli_runner.go:164] Run: docker container inspect newest-cni-549946 --format={{.State.Status}}
	I1227 20:58:13.750775  511686 config.go:182] Loaded profile config "newest-cni-549946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:58:13.750936  511686 addons.go:70] Setting default-storageclass=true in profile "newest-cni-549946"
	I1227 20:58:13.750958  511686 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-549946"
	I1227 20:58:13.756044  511686 cli_runner.go:164] Run: docker container inspect newest-cni-549946 --format={{.State.Status}}
	I1227 20:58:13.757119  511686 out.go:179] * Verifying Kubernetes components...
	I1227 20:58:13.762979  511686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:58:13.833534  511686 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:58:13.833652  511686 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 20:58:13.836427  511686 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 20:58:13.838829  511686 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:58:13.838855  511686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:58:13.838916  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:13.839309  511686 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 20:58:13.839340  511686 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 20:58:13.839389  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:13.851926  511686 addons.go:239] Setting addon default-storageclass=true in "newest-cni-549946"
	W1227 20:58:13.851944  511686 addons.go:248] addon default-storageclass should already be in state true
	I1227 20:58:13.851967  511686 host.go:66] Checking if "newest-cni-549946" exists ...
	I1227 20:58:13.852369  511686 cli_runner.go:164] Run: docker container inspect newest-cni-549946 --format={{.State.Status}}
	I1227 20:58:13.919929  511686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:58:13.922581  511686 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:58:13.922600  511686 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:58:13.922663  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:13.923030  511686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:58:13.954687  511686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:58:14.233378  511686 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:58:14.257438  511686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:58:14.276911  511686 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:58:14.276977  511686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:58:14.290997  511686 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 20:58:14.291021  511686 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 20:58:14.342415  511686 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 20:58:14.342438  511686 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 20:58:14.405072  511686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:58:14.465124  511686 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 20:58:14.465146  511686 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 20:58:14.593893  511686 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 20:58:14.593919  511686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 20:58:14.678960  511686 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 20:58:14.678988  511686 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 20:58:14.737815  511686 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 20:58:14.737844  511686 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 20:58:14.758044  511686 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 20:58:14.758068  511686 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 20:58:14.786324  511686 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 20:58:14.786348  511686 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 20:58:14.826950  511686 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:58:14.826976  511686 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 20:58:14.854654  511686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:58:19.219540  511686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.962003026s)
	I1227 20:58:19.219606  511686 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.942620638s)
	I1227 20:58:19.219621  511686 api_server.go:72] duration metric: took 5.471396629s to wait for apiserver process to appear ...
	I1227 20:58:19.219628  511686 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:58:19.219646  511686 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 20:58:19.219965  511686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.814870091s)
	I1227 20:58:19.220261  511686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.365578111s)
	I1227 20:58:19.223802  511686 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-549946 addons enable metrics-server
	
	I1227 20:58:19.236559  511686 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1227 20:58:19.238943  511686 api_server.go:141] control plane version: v1.35.0
	I1227 20:58:19.238967  511686 api_server.go:131] duration metric: took 19.333471ms to wait for apiserver health ...
	I1227 20:58:19.238977  511686 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:58:19.244985  511686 system_pods.go:59] 8 kube-system pods found
	I1227 20:58:19.245018  511686 system_pods.go:61] "coredns-7d764666f9-lwqng" [fad8ca65-36d9-4617-8bc9-d4c9def1d5b5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 20:58:19.245029  511686 system_pods.go:61] "etcd-newest-cni-549946" [a5d0bbff-5553-4cc5-ab87-057dbf70fa61] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:58:19.245037  511686 system_pods.go:61] "kindnet-x98wp" [344e609e-29a5-476e-9578-0ac5e389ff93] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:58:19.245049  511686 system_pods.go:61] "kube-apiserver-newest-cni-549946" [dd80588f-c85b-4a1b-a933-c2e2a987d7ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:58:19.245060  511686 system_pods.go:61] "kube-controller-manager-newest-cni-549946" [9cd183d8-c947-4b6e-a4cd-3603c51d4909] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:58:19.245071  511686 system_pods.go:61] "kube-proxy-j8h9m" [e72d123e-acc5-453f-b934-82214364e93d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 20:58:19.245085  511686 system_pods.go:61] "kube-scheduler-newest-cni-549946" [ac00b2bc-7be9-46a6-8025-45f27e7dfebc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:58:19.245091  511686 system_pods.go:61] "storage-provisioner" [7b4a0a3b-3bad-4818-8f46-2b25602b28c3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 20:58:19.245096  511686 system_pods.go:74] duration metric: took 6.114557ms to wait for pod list to return data ...
	I1227 20:58:19.245108  511686 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:58:19.246297  511686 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 20:58:19.249521  511686 addons.go:530] duration metric: took 5.500946101s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 20:58:19.250044  511686 default_sa.go:45] found service account: "default"
	I1227 20:58:19.250064  511686 default_sa.go:55] duration metric: took 4.950553ms for default service account to be created ...
	I1227 20:58:19.250075  511686 kubeadm.go:587] duration metric: took 5.501849552s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 20:58:19.250093  511686 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:58:19.253638  511686 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:58:19.253669  511686 node_conditions.go:123] node cpu capacity is 2
	I1227 20:58:19.253682  511686 node_conditions.go:105] duration metric: took 3.583279ms to run NodePressure ...
	I1227 20:58:19.253694  511686 start.go:242] waiting for startup goroutines ...
	I1227 20:58:19.253703  511686 start.go:247] waiting for cluster config update ...
	I1227 20:58:19.253717  511686 start.go:256] writing updated cluster config ...
	I1227 20:58:19.253975  511686 ssh_runner.go:195] Run: rm -f paused
	I1227 20:58:19.322382  511686 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 20:58:19.325398  511686 out.go:203] 
	W1227 20:58:19.328470  511686 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 20:58:19.331387  511686 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 20:58:19.334579  511686 out.go:179] * Done! kubectl is now configured to use "newest-cni-549946" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.874965219Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.881676987Z" level=info msg="Running pod sandbox: kube-system/kindnet-x98wp/POD" id=bedc72d2-597f-442f-86f1-230124cb8062 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.881739754Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.886375008Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=0a97bf1c-df0b-4bce-879e-7faecfe33111 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.88977174Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=bedc72d2-597f-442f-86f1-230124cb8062 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.896347463Z" level=info msg="Ran pod sandbox aa868978df6518694c31c527951773b0b93f2decfd4035b41fd27ba1c2f552f6 with infra container: kube-system/kube-proxy-j8h9m/POD" id=0a97bf1c-df0b-4bce-879e-7faecfe33111 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.89943071Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=44d25604-1a94-4630-969e-a0a00a4cebc5 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.900373724Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=4091dec6-1e96-4e83-a4e1-4a0a931ad597 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.903437707Z" level=info msg="Creating container: kube-system/kube-proxy-j8h9m/kube-proxy" id=ece6e4c0-35f3-4fd7-85c4-a7faeb96dee4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.903542976Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.903860005Z" level=info msg="Ran pod sandbox 2d2305d1aceeb3a4cd585b0ed380cd352521d84a9893d6790bdf2bb40d6d0030 with infra container: kube-system/kindnet-x98wp/POD" id=bedc72d2-597f-442f-86f1-230124cb8062 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.906572194Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=b2965885-2ff5-4788-a1c2-f9478969f677 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.9105147Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=55e8e085-3b29-4d74-8cf4-db2b74285844 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.91079461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.911567496Z" level=info msg="Creating container: kube-system/kindnet-x98wp/kindnet-cni" id=00d4f78b-327b-42e5-b24b-d728c87cdf96 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.911672797Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.912200701Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.938793363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.940343393Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.991438092Z" level=info msg="Created container 24c96d8311b2eede51392a6988d59a67f1e627441610efe3a19a1e6caed81c77: kube-system/kindnet-x98wp/kindnet-cni" id=00d4f78b-327b-42e5-b24b-d728c87cdf96 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.992191925Z" level=info msg="Starting container: 24c96d8311b2eede51392a6988d59a67f1e627441610efe3a19a1e6caed81c77" id=a14b9dc4-7a2c-4589-9cbd-d321dd8f398b name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.994187383Z" level=info msg="Started container" PID=1075 containerID=24c96d8311b2eede51392a6988d59a67f1e627441610efe3a19a1e6caed81c77 description=kube-system/kindnet-x98wp/kindnet-cni id=a14b9dc4-7a2c-4589-9cbd-d321dd8f398b name=/runtime.v1.RuntimeService/StartContainer sandboxID=2d2305d1aceeb3a4cd585b0ed380cd352521d84a9893d6790bdf2bb40d6d0030
	Dec 27 20:58:19 newest-cni-549946 crio[613]: time="2025-12-27T20:58:19.081903681Z" level=info msg="Created container 5355cd83edb132719f8b40ff6136f563ecb8fefdebeae2c0612a70e25533a2ff: kube-system/kube-proxy-j8h9m/kube-proxy" id=ece6e4c0-35f3-4fd7-85c4-a7faeb96dee4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:58:19 newest-cni-549946 crio[613]: time="2025-12-27T20:58:19.082718535Z" level=info msg="Starting container: 5355cd83edb132719f8b40ff6136f563ecb8fefdebeae2c0612a70e25533a2ff" id=503c7d6d-8725-425f-a4b0-4d85bc3b2644 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:58:19 newest-cni-549946 crio[613]: time="2025-12-27T20:58:19.087464638Z" level=info msg="Started container" PID=1074 containerID=5355cd83edb132719f8b40ff6136f563ecb8fefdebeae2c0612a70e25533a2ff description=kube-system/kube-proxy-j8h9m/kube-proxy id=503c7d6d-8725-425f-a4b0-4d85bc3b2644 name=/runtime.v1.RuntimeService/StartContainer sandboxID=aa868978df6518694c31c527951773b0b93f2decfd4035b41fd27ba1c2f552f6
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	24c96d8311b2e       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13   3 seconds ago       Running             kindnet-cni               1                   2d2305d1aceeb       kindnet-x98wp                               kube-system
	5355cd83edb13       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   3 seconds ago       Running             kube-proxy                1                   aa868978df651       kube-proxy-j8h9m                            kube-system
	c0bd9fdc2ab59       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   9 seconds ago       Running             etcd                      1                   915a72e82df3c       etcd-newest-cni-549946                      kube-system
	7fdca341cdd33       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   9 seconds ago       Running             kube-controller-manager   1                   326de8bbc03ae       kube-controller-manager-newest-cni-549946   kube-system
	0f6243dabab6f       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   9 seconds ago       Running             kube-apiserver            1                   9544ff076daaf       kube-apiserver-newest-cni-549946            kube-system
	d216918be758a       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   9 seconds ago       Running             kube-scheduler            1                   7998abb75e427       kube-scheduler-newest-cni-549946            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-549946
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-549946
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=newest-cni-549946
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_57_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:57:51 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-549946
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:58:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:58:18 +0000   Sat, 27 Dec 2025 20:57:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:58:18 +0000   Sat, 27 Dec 2025 20:57:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:58:18 +0000   Sat, 27 Dec 2025 20:57:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 27 Dec 2025 20:58:18 +0000   Sat, 27 Dec 2025 20:57:48 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-549946
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                6b7382ae-8399-40ff-bb99-a6dfaed9059c
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-549946                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         29s
	  kube-system                 kindnet-x98wp                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-newest-cni-549946             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-newest-cni-549946    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-j8h9m                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-newest-cni-549946             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  25s   node-controller  Node newest-cni-549946 event: Registered Node newest-cni-549946 in Controller
	  Normal  RegisteredNode  1s    node-controller  Node newest-cni-549946 event: Registered Node newest-cni-549946 in Controller
	
	
	==> dmesg <==
	[ +35.447549] overlayfs: idmapped layers are currently not supported
	[Dec27 20:26] overlayfs: idmapped layers are currently not supported
	[Dec27 20:27] overlayfs: idmapped layers are currently not supported
	[  +6.770645] overlayfs: idmapped layers are currently not supported
	[Dec27 20:28] overlayfs: idmapped layers are currently not supported
	[ +25.872751] overlayfs: idmapped layers are currently not supported
	[Dec27 20:29] overlayfs: idmapped layers are currently not supported
	[ +32.997137] overlayfs: idmapped layers are currently not supported
	[Dec27 20:31] overlayfs: idmapped layers are currently not supported
	[Dec27 20:33] overlayfs: idmapped layers are currently not supported
	[ +33.772475] overlayfs: idmapped layers are currently not supported
	[Dec27 20:39] overlayfs: idmapped layers are currently not supported
	[Dec27 20:40] overlayfs: idmapped layers are currently not supported
	[Dec27 20:44] overlayfs: idmapped layers are currently not supported
	[Dec27 20:45] overlayfs: idmapped layers are currently not supported
	[Dec27 20:49] overlayfs: idmapped layers are currently not supported
	[Dec27 20:50] overlayfs: idmapped layers are currently not supported
	[Dec27 20:51] overlayfs: idmapped layers are currently not supported
	[Dec27 20:52] overlayfs: idmapped layers are currently not supported
	[Dec27 20:53] overlayfs: idmapped layers are currently not supported
	[Dec27 20:55] overlayfs: idmapped layers are currently not supported
	[ +57.272039] overlayfs: idmapped layers are currently not supported
	[Dec27 20:57] overlayfs: idmapped layers are currently not supported
	[ +34.093681] overlayfs: idmapped layers are currently not supported
	[Dec27 20:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c0bd9fdc2ab5940221455fa985fb0081e3b368f0836df24c03263c5d6dcce82f] <==
	{"level":"info","ts":"2025-12-27T20:58:13.522567Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T20:58:13.522641Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T20:58:13.523109Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-27T20:58:13.523768Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T20:58:13.523794Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T20:58:13.523829Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-27T20:58:13.523836Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-27T20:58:13.593224Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T20:58:13.593286Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:58:13.593365Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T20:58:13.593379Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:58:13.593394Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T20:58:13.597639Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T20:58:13.597686Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:58:13.597708Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-27T20:58:13.597718Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T20:58:13.599588Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:newest-cni-549946 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:58:13.599713Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:58:13.599783Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:58:13.599946Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:58:13.600026Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:58:13.612216Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:58:13.631477Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:58:13.629570Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:58:13.671857Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 20:58:23 up  2:40,  0 user,  load average: 3.84, 2.19, 1.92
	Linux newest-cni-549946 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [24c96d8311b2eede51392a6988d59a67f1e627441610efe3a19a1e6caed81c77] <==
	I1227 20:58:19.155115       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:58:19.155307       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1227 20:58:19.155411       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:58:19.155423       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:58:19.155432       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:58:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:58:19.333873       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:58:19.333943       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:58:19.333981       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:58:19.334339       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [0f6243dabab6f6acf95adfec716a9b5499ae2e36626a7d7c095f59e8c54137e7] <==
	I1227 20:58:17.938507       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:17.939579       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 20:58:17.942358       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 20:58:17.943069       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 20:58:17.943191       1 aggregator.go:187] initial CRD sync complete...
	I1227 20:58:17.943206       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 20:58:17.943212       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:58:17.943217       1 cache.go:39] Caches are synced for autoregister controller
	E1227 20:58:17.945120       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 20:58:17.961530       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:17.961570       1 policy_source.go:248] refreshing policies
	I1227 20:58:17.971836       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:58:18.650818       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:58:18.723304       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 20:58:18.747044       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:58:18.789533       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:58:18.837072       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:58:18.852842       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:58:18.999284       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.181.211"}
	I1227 20:58:19.021651       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.86.198"}
	I1227 20:58:21.523075       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:58:21.570517       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:58:21.570518       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:58:21.676325       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:58:21.782741       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [7fdca341cdd33eaa6d5b453c79b202a239ceef8d792a7c82a51f302277a8b8ce] <==
	I1227 20:58:21.132780       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.132808       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.132870       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 20:58:21.132935       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-549946"
	I1227 20:58:21.132941       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.132974       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.132980       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1227 20:58:21.132997       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.133000       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.133126       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.133151       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.133297       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.133531       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.133623       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.137640       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.137715       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.137757       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.137783       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.140321       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.148994       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.149081       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:58:21.149120       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:58:21.149016       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.156219       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.191672       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [5355cd83edb132719f8b40ff6136f563ecb8fefdebeae2c0612a70e25533a2ff] <==
	I1227 20:58:19.271324       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:58:19.348653       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:58:19.449852       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:19.449886       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1227 20:58:19.449972       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:58:19.499772       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:58:19.499847       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:58:19.504524       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:58:19.504974       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:58:19.504994       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:58:19.509254       1 config.go:200] "Starting service config controller"
	I1227 20:58:19.509280       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:58:19.509298       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:58:19.509303       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:58:19.509319       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:58:19.509323       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:58:19.509992       1 config.go:309] "Starting node config controller"
	I1227 20:58:19.510011       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:58:19.510018       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:58:19.609978       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 20:58:19.610016       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:58:19.610051       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d216918be758a4a0fd5e7b8edf4f0ee84f75b9bdcd96a5fd40cbde5de645de91] <==
	I1227 20:58:16.013165       1 serving.go:386] Generated self-signed cert in-memory
	I1227 20:58:17.930107       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 20:58:17.930137       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:58:17.955765       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 20:58:17.955882       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1227 20:58:17.955894       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:58:17.955925       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 20:58:17.963301       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 20:58:17.963326       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:58:17.963349       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1227 20:58:17.963355       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:58:18.066992       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:18.067032       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:18.165511       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: I1227 20:58:18.076993     733 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: I1227 20:58:18.078478     733 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: E1227 20:58:18.083481     733 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-549946\" already exists" pod="kube-system/kube-apiserver-newest-cni-549946"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: I1227 20:58:18.083644     733 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-549946"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: E1227 20:58:18.180413     733 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-549946\" already exists" pod="kube-system/kube-controller-manager-newest-cni-549946"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: I1227 20:58:18.180457     733 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-549946"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: E1227 20:58:18.239305     733 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-549946\" already exists" pod="kube-system/kube-scheduler-newest-cni-549946"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: I1227 20:58:18.564095     733 apiserver.go:52] "Watching apiserver"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: E1227 20:58:18.570774     733 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-549946" containerName="kube-controller-manager"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: E1227 20:58:18.574179     733 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-549946" containerName="etcd"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: E1227 20:58:18.574372     733 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-549946" containerName="kube-scheduler"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: E1227 20:58:18.574653     733 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-549946" containerName="kube-apiserver"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: I1227 20:58:18.579952     733 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: I1227 20:58:18.643022     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/344e609e-29a5-476e-9578-0ac5e389ff93-cni-cfg\") pod \"kindnet-x98wp\" (UID: \"344e609e-29a5-476e-9578-0ac5e389ff93\") " pod="kube-system/kindnet-x98wp"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: I1227 20:58:18.643778     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/344e609e-29a5-476e-9578-0ac5e389ff93-lib-modules\") pod \"kindnet-x98wp\" (UID: \"344e609e-29a5-476e-9578-0ac5e389ff93\") " pod="kube-system/kindnet-x98wp"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: I1227 20:58:18.643941     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e72d123e-acc5-453f-b934-82214364e93d-xtables-lock\") pod \"kube-proxy-j8h9m\" (UID: \"e72d123e-acc5-453f-b934-82214364e93d\") " pod="kube-system/kube-proxy-j8h9m"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: I1227 20:58:18.644061     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e72d123e-acc5-453f-b934-82214364e93d-lib-modules\") pod \"kube-proxy-j8h9m\" (UID: \"e72d123e-acc5-453f-b934-82214364e93d\") " pod="kube-system/kube-proxy-j8h9m"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: I1227 20:58:18.644164     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/344e609e-29a5-476e-9578-0ac5e389ff93-xtables-lock\") pod \"kindnet-x98wp\" (UID: \"344e609e-29a5-476e-9578-0ac5e389ff93\") " pod="kube-system/kindnet-x98wp"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: I1227 20:58:18.673104     733 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: E1227 20:58:18.717281     733 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-549946" containerName="kube-controller-manager"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: W1227 20:58:18.897126     733 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/33026e33441a3f96ec992d0bc78455daa35943f22f16bf93834cd28639575522/crio-2d2305d1aceeb3a4cd585b0ed380cd352521d84a9893d6790bdf2bb40d6d0030 WatchSource:0}: Error finding container 2d2305d1aceeb3a4cd585b0ed380cd352521d84a9893d6790bdf2bb40d6d0030: Status 404 returned error can't find the container with id 2d2305d1aceeb3a4cd585b0ed380cd352521d84a9893d6790bdf2bb40d6d0030
	Dec 27 20:58:19 newest-cni-549946 kubelet[733]: E1227 20:58:19.025297     733 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-549946" containerName="kube-scheduler"
	Dec 27 20:58:20 newest-cni-549946 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 20:58:20 newest-cni-549946 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 20:58:20 newest-cni-549946 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-549946 -n newest-cni-549946
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-549946 -n newest-cni-549946: exit status 2 (333.116849ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-549946 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-lwqng storage-provisioner dashboard-metrics-scraper-867fb5f87b-bgfj9 kubernetes-dashboard-b84665fb8-xzsn4
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-549946 describe pod coredns-7d764666f9-lwqng storage-provisioner dashboard-metrics-scraper-867fb5f87b-bgfj9 kubernetes-dashboard-b84665fb8-xzsn4
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-549946 describe pod coredns-7d764666f9-lwqng storage-provisioner dashboard-metrics-scraper-867fb5f87b-bgfj9 kubernetes-dashboard-b84665fb8-xzsn4: exit status 1 (81.118793ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-lwqng" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-bgfj9" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-xzsn4" not found

** /stderr **
helpers_test.go:288: kubectl --context newest-cni-549946 describe pod coredns-7d764666f9-lwqng storage-provisioner dashboard-metrics-scraper-867fb5f87b-bgfj9 kubernetes-dashboard-b84665fb8-xzsn4: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-549946
helpers_test.go:244: (dbg) docker inspect newest-cni-549946:

-- stdout --
	[
	    {
	        "Id": "33026e33441a3f96ec992d0bc78455daa35943f22f16bf93834cd28639575522",
	        "Created": "2025-12-27T20:57:32.376707101Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 511818,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:58:05.887615612Z",
	            "FinishedAt": "2025-12-27T20:58:05.047737794Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/33026e33441a3f96ec992d0bc78455daa35943f22f16bf93834cd28639575522/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/33026e33441a3f96ec992d0bc78455daa35943f22f16bf93834cd28639575522/hostname",
	        "HostsPath": "/var/lib/docker/containers/33026e33441a3f96ec992d0bc78455daa35943f22f16bf93834cd28639575522/hosts",
	        "LogPath": "/var/lib/docker/containers/33026e33441a3f96ec992d0bc78455daa35943f22f16bf93834cd28639575522/33026e33441a3f96ec992d0bc78455daa35943f22f16bf93834cd28639575522-json.log",
	        "Name": "/newest-cni-549946",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-549946:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-549946",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "33026e33441a3f96ec992d0bc78455daa35943f22f16bf93834cd28639575522",
	                "LowerDir": "/var/lib/docker/overlay2/982c2034b4244173858000f623c93c04ec27cc043c3dc430bac371ee9def442a-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/982c2034b4244173858000f623c93c04ec27cc043c3dc430bac371ee9def442a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/982c2034b4244173858000f623c93c04ec27cc043c3dc430bac371ee9def442a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/982c2034b4244173858000f623c93c04ec27cc043c3dc430bac371ee9def442a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-549946",
	                "Source": "/var/lib/docker/volumes/newest-cni-549946/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-549946",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-549946",
	                "name.minikube.sigs.k8s.io": "newest-cni-549946",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "16e7fecb9f4ebfa5bd50ffc99b03be2957b0cef7bd306da8442ca26a2af9c38d",
	            "SandboxKey": "/var/run/docker/netns/16e7fecb9f4e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-549946": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:8c:90:fb:9d:b6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b57edbc724b90e99751e5881513e82042c8927605a9433275af7712a02f70992",
	                    "EndpointID": "5331bb28f7c335eb435365cdfa4acfd04b837e4601947e5c3926192a7c140e2f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-549946",
	                        "33026e33441a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-549946 -n newest-cni-549946
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-549946 -n newest-cni-549946: exit status 2 (333.058743ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-549946 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-549946 logs -n 25: (1.017451735s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p default-k8s-diff-port-058924 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-058924                                                                                                                                                                                                               │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
	│ delete  │ -p default-k8s-diff-port-058924                                                                                                                                                                                                               │ default-k8s-diff-port-058924 │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:54 UTC │
	│ start   │ -p embed-certs-193865 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:54 UTC │ 27 Dec 25 20:55 UTC │
	│ addons  │ enable metrics-server -p embed-certs-193865 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │                     │
	│ stop    │ -p embed-certs-193865 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │ 27 Dec 25 20:55 UTC │
	│ addons  │ enable dashboard -p embed-certs-193865 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │ 27 Dec 25 20:55 UTC │
	│ start   │ -p embed-certs-193865 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:55 UTC │ 27 Dec 25 20:56 UTC │
	│ image   │ embed-certs-193865 image list --format=json                                                                                                                                                                                                   │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:56 UTC │ 27 Dec 25 20:56 UTC │
	│ pause   │ -p embed-certs-193865 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:56 UTC │                     │
	│ delete  │ -p embed-certs-193865                                                                                                                                                                                                                         │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ delete  │ -p embed-certs-193865                                                                                                                                                                                                                         │ embed-certs-193865           │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ delete  │ -p disable-driver-mounts-371621                                                                                                                                                                                                               │ disable-driver-mounts-371621 │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ start   │ -p no-preload-542467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-542467            │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:58 UTC │
	│ ssh     │ force-systemd-flag-604544 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-604544    │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ delete  │ -p force-systemd-flag-604544                                                                                                                                                                                                                  │ force-systemd-flag-604544    │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ start   │ -p newest-cni-549946 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-549946            │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:58 UTC │
	│ addons  │ enable metrics-server -p newest-cni-549946 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-549946            │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │                     │
	│ stop    │ -p newest-cni-549946 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-549946            │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ addons  │ enable dashboard -p newest-cni-549946 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-549946            │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ start   │ -p newest-cni-549946 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-549946            │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ addons  │ enable metrics-server -p no-preload-542467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-542467            │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │                     │
	│ stop    │ -p no-preload-542467 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-542467            │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │                     │
	│ image   │ newest-cni-549946 image list --format=json                                                                                                                                                                                                    │ newest-cni-549946            │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ pause   │ -p newest-cni-549946 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-549946            │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:58:05
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:58:05.579669  511686 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:58:05.579898  511686 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:58:05.579928  511686 out.go:374] Setting ErrFile to fd 2...
	I1227 20:58:05.579952  511686 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:58:05.580412  511686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:58:05.581121  511686 out.go:368] Setting JSON to false
	I1227 20:58:05.582445  511686 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9638,"bootTime":1766859448,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:58:05.582520  511686 start.go:143] virtualization:  
	I1227 20:58:05.585842  511686 out.go:179] * [newest-cni-549946] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:58:05.589957  511686 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:58:05.590029  511686 notify.go:221] Checking for updates...
	I1227 20:58:05.596871  511686 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:58:05.599873  511686 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:58:05.602734  511686 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:58:05.605699  511686 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:58:05.608504  511686 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:58:05.611820  511686 config.go:182] Loaded profile config "newest-cni-549946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:58:05.612429  511686 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:58:05.639562  511686 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:58:05.639836  511686 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:58:05.710299  511686 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:58:05.701183266 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:58:05.710404  511686 docker.go:319] overlay module found
	I1227 20:58:05.713567  511686 out.go:179] * Using the docker driver based on existing profile
	I1227 20:58:05.716553  511686 start.go:309] selected driver: docker
	I1227 20:58:05.716573  511686 start.go:928] validating driver "docker" against &{Name:newest-cni-549946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-549946 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:58:05.716701  511686 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:58:05.717439  511686 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:58:05.789602  511686 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:58:05.778418689 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:58:05.789943  511686 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 20:58:05.789967  511686 cni.go:84] Creating CNI manager for ""
	I1227 20:58:05.790020  511686 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:58:05.790051  511686 start.go:353] cluster config:
	{Name:newest-cni-549946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-549946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:58:05.795273  511686 out.go:179] * Starting "newest-cni-549946" primary control-plane node in "newest-cni-549946" cluster
	I1227 20:58:05.798135  511686 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:58:05.801013  511686 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:58:05.803853  511686 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:58:05.803899  511686 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:58:05.803909  511686 cache.go:65] Caching tarball of preloaded images
	I1227 20:58:05.803989  511686 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:58:05.804005  511686 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:58:05.804125  511686 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/config.json ...
	I1227 20:58:05.804333  511686 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:58:05.826712  511686 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:58:05.826731  511686 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:58:05.826752  511686 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:58:05.826783  511686 start.go:360] acquireMachinesLock for newest-cni-549946: {Name:mk8b0ea7d2aaecab8531b3a335f669f52685ec48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:58:05.826839  511686 start.go:364] duration metric: took 34.124µs to acquireMachinesLock for "newest-cni-549946"
	I1227 20:58:05.826864  511686 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:58:05.826869  511686 fix.go:54] fixHost starting: 
	I1227 20:58:05.827164  511686 cli_runner.go:164] Run: docker container inspect newest-cni-549946 --format={{.State.Status}}
	I1227 20:58:05.848340  511686 fix.go:112] recreateIfNeeded on newest-cni-549946: state=Stopped err=<nil>
	W1227 20:58:05.848368  511686 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:58:05.851717  511686 out.go:252] * Restarting existing docker container for "newest-cni-549946" ...
	I1227 20:58:05.851823  511686 cli_runner.go:164] Run: docker start newest-cni-549946
	I1227 20:58:06.186737  511686 cli_runner.go:164] Run: docker container inspect newest-cni-549946 --format={{.State.Status}}
	I1227 20:58:06.221990  511686 kic.go:430] container "newest-cni-549946" state is running.
	I1227 20:58:06.222364  511686 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-549946
	I1227 20:58:06.255265  511686 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/config.json ...
	I1227 20:58:06.255490  511686 machine.go:94] provisionDockerMachine start ...
	I1227 20:58:06.255558  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:06.284882  511686 main.go:144] libmachine: Using SSH client type: native
	I1227 20:58:06.285200  511686 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1227 20:58:06.285208  511686 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:58:06.285790  511686 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56808->127.0.0.1:33448: read: connection reset by peer
	I1227 20:58:09.429036  511686 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-549946
	
	I1227 20:58:09.429104  511686 ubuntu.go:182] provisioning hostname "newest-cni-549946"
	I1227 20:58:09.429174  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:09.448270  511686 main.go:144] libmachine: Using SSH client type: native
	I1227 20:58:09.448616  511686 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1227 20:58:09.448627  511686 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-549946 && echo "newest-cni-549946" | sudo tee /etc/hostname
	I1227 20:58:09.599731  511686 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-549946
	
	I1227 20:58:09.599830  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:09.617017  511686 main.go:144] libmachine: Using SSH client type: native
	I1227 20:58:09.617346  511686 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1227 20:58:09.617365  511686 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-549946' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-549946/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-549946' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:58:09.753625  511686 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:58:09.753650  511686 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:58:09.753675  511686 ubuntu.go:190] setting up certificates
	I1227 20:58:09.753686  511686 provision.go:84] configureAuth start
	I1227 20:58:09.753743  511686 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-549946
	I1227 20:58:09.771756  511686 provision.go:143] copyHostCerts
	I1227 20:58:09.771827  511686 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:58:09.771846  511686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:58:09.771920  511686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:58:09.772025  511686 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:58:09.772034  511686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:58:09.772062  511686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:58:09.772123  511686 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:58:09.772135  511686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:58:09.772162  511686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:58:09.772213  511686 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.newest-cni-549946 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-549946]
	I1227 20:58:09.888037  511686 provision.go:177] copyRemoteCerts
	I1227 20:58:09.888101  511686 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:58:09.888139  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:09.906220  511686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:58:10.004983  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:58:10.031391  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 20:58:10.051651  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:58:10.072539  511686 provision.go:87] duration metric: took 318.829454ms to configureAuth
	I1227 20:58:10.072567  511686 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:58:10.072782  511686 config.go:182] Loaded profile config "newest-cni-549946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:58:10.072891  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:10.090743  511686 main.go:144] libmachine: Using SSH client type: native
	I1227 20:58:10.091067  511686 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1227 20:58:10.091085  511686 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:58:10.425631  511686 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:58:10.425654  511686 machine.go:97] duration metric: took 4.170154292s to provisionDockerMachine
	I1227 20:58:10.425666  511686 start.go:293] postStartSetup for "newest-cni-549946" (driver="docker")
	I1227 20:58:10.425677  511686 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:58:10.425751  511686 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:58:10.425804  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:10.443588  511686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:58:10.545357  511686 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:58:10.548638  511686 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:58:10.548665  511686 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:58:10.548677  511686 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:58:10.548732  511686 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:58:10.548824  511686 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:58:10.548929  511686 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:58:10.556727  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:58:10.574205  511686 start.go:296] duration metric: took 148.523271ms for postStartSetup
	I1227 20:58:10.574304  511686 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:58:10.574346  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:10.591055  511686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:58:10.686497  511686 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:58:10.691401  511686 fix.go:56] duration metric: took 4.864525019s for fixHost
	I1227 20:58:10.691428  511686 start.go:83] releasing machines lock for "newest-cni-549946", held for 4.864579443s
	I1227 20:58:10.691506  511686 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-549946
	I1227 20:58:10.709438  511686 ssh_runner.go:195] Run: cat /version.json
	I1227 20:58:10.709546  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:10.709651  511686 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:58:10.709728  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:10.737635  511686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:58:10.751289  511686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:58:10.938928  511686 ssh_runner.go:195] Run: systemctl --version
	I1227 20:58:10.945537  511686 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:58:10.981187  511686 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:58:10.986019  511686 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:58:10.986087  511686 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:58:10.993646  511686 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 20:58:10.993670  511686 start.go:496] detecting cgroup driver to use...
	I1227 20:58:10.993727  511686 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:58:10.993792  511686 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:58:11.009752  511686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:58:11.024113  511686 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:58:11.024178  511686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:58:11.040143  511686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:58:11.053938  511686 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:58:11.175025  511686 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:58:11.301907  511686 docker.go:234] disabling docker service ...
	I1227 20:58:11.301979  511686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:58:11.316307  511686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:58:11.328500  511686 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:58:11.451976  511686 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:58:11.572834  511686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:58:11.587070  511686 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:58:11.600577  511686 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:58:11.600670  511686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:11.609586  511686 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:58:11.609685  511686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:11.618475  511686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:11.627541  511686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:11.635766  511686 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:58:11.643217  511686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:11.651666  511686 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:11.661218  511686 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:11.670033  511686 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:58:11.677781  511686 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:58:11.686320  511686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:58:11.806390  511686 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:58:11.997876  511686 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:58:11.998033  511686 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:58:12.002126  511686 start.go:574] Will wait 60s for crictl version
	I1227 20:58:12.002242  511686 ssh_runner.go:195] Run: which crictl
	I1227 20:58:12.005830  511686 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:58:12.034649  511686 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:58:12.034748  511686 ssh_runner.go:195] Run: crio --version
	I1227 20:58:12.067553  511686 ssh_runner.go:195] Run: crio --version
	I1227 20:58:12.101209  511686 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 20:58:12.104143  511686 cli_runner.go:164] Run: docker network inspect newest-cni-549946 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:58:12.120402  511686 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 20:58:12.124331  511686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:58:12.136753  511686 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1227 20:58:12.139416  511686 kubeadm.go:884] updating cluster {Name:newest-cni-549946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-549946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:58:12.139565  511686 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:58:12.139643  511686 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:58:12.172957  511686 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:58:12.172983  511686 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:58:12.173035  511686 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:58:12.199879  511686 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:58:12.199902  511686 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:58:12.199911  511686 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1227 20:58:12.200001  511686 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-549946 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-549946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
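The empty `ExecStart=` line followed by a populated one is the standard systemd drop-in idiom: it clears the packaged kubelet command line before substituting the minikube-specific flags. The drop-in itself is the 10-kubeadm.conf file scp'd a few lines below; on the node it can be inspected with:

    systemctl cat kubelet                    # kubelet.service plus the 10-kubeadm.conf drop-in
    systemd-analyze verify kubelet.service   # sanity-check the merged unit
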
	I1227 20:58:12.200086  511686 ssh_runner.go:195] Run: crio config
	I1227 20:58:12.253231  511686 cni.go:84] Creating CNI manager for ""
	I1227 20:58:12.253254  511686 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:58:12.253275  511686 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1227 20:58:12.253330  511686 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-549946 NodeName:newest-cni-549946 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:58:12.253507  511686 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-549946"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
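The generated file stacks four documents: a kubeadm InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta4), a KubeletConfiguration, and a KubeProxyConfiguration. A hedged sketch of sanity-checking such a file before kubeadm consumes it; `kubeadm config validate` exists in recent kubeadm releases, and the binary path and target file are the ones the log uses just below, so treat both as assumptions outside this run:

    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    kubeadm config print init-defaults   # upstream defaults for the same API versions, for comparison
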
	I1227 20:58:12.253667  511686 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:58:12.261200  511686 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:58:12.261287  511686 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:58:12.268254  511686 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 20:58:12.280255  511686 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:58:12.292764  511686 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1227 20:58:12.305027  511686 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:58:12.308536  511686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:58:12.318005  511686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:58:12.426126  511686 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:58:12.441235  511686 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946 for IP: 192.168.85.2
	I1227 20:58:12.441257  511686 certs.go:195] generating shared ca certs ...
	I1227 20:58:12.441274  511686 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:12.441415  511686 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:58:12.441493  511686 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:58:12.441507  511686 certs.go:257] generating profile certs ...
	I1227 20:58:12.441591  511686 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/client.key
	I1227 20:58:12.441668  511686 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/apiserver.key.a445ad92
	I1227 20:58:12.441724  511686 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/proxy-client.key
	I1227 20:58:12.441843  511686 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:58:12.441878  511686 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:58:12.441891  511686 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:58:12.441924  511686 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:58:12.441950  511686 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:58:12.441978  511686 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:58:12.442040  511686 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:58:12.442610  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:58:12.475559  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:58:12.495239  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:58:12.514584  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:58:12.532594  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 20:58:12.549293  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 20:58:12.567471  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:58:12.599194  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/newest-cni-549946/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 20:58:12.622823  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:58:12.647147  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:58:12.667751  511686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:58:12.686297  511686 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:58:12.701340  511686 ssh_runner.go:195] Run: openssl version
	I1227 20:58:12.709127  511686 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:58:12.717183  511686 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:58:12.724801  511686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:58:12.734947  511686 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:58:12.735009  511686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:58:12.780174  511686 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:58:12.792965  511686 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:58:12.800693  511686 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:58:12.808613  511686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:58:12.812373  511686 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:58:12.812477  511686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:58:12.853581  511686 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:58:12.860900  511686 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:58:12.868730  511686 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:58:12.876059  511686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:58:12.879872  511686 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:58:12.879951  511686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:58:12.929410  511686 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
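The hash-and-symlink sequence above is how OpenSSL's CA lookup works: certificates in /etc/ssl/certs are found through links named after the subject-name hash, so each PEM gets a link whose name matches its `openssl x509 -hash` output (the 3ec20f2e, b5213941 and 51391683 values checked above). A hedged sketch of the same step done by hand for one of the files:

    PEM=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(sudo openssl x509 -hash -noout -in "$PEM")
    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"
    sudo test -L "/etc/ssl/certs/${HASH}.0" && echo "CA link in place"
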
	I1227 20:58:12.936808  511686 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:58:12.940746  511686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:58:12.995512  511686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:58:13.066599  511686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:58:13.113964  511686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:58:13.221548  511686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:58:13.296541  511686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
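Each `-checkend 86400` probe asks whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means yes, non-zero means it expires within that window. A minimal sketch of the same check with a readable result, reusing one of the paths above:

    if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "certificate valid for at least another 24h"
    else
      echo "certificate expires (or has expired) within 24h"
    fi
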
	I1227 20:58:13.512869  511686 kubeadm.go:401] StartCluster: {Name:newest-cni-549946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-549946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:58:13.512958  511686 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:58:13.513022  511686 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:58:13.656475  511686 cri.go:96] found id: "c0bd9fdc2ab5940221455fa985fb0081e3b368f0836df24c03263c5d6dcce82f"
	I1227 20:58:13.656502  511686 cri.go:96] found id: "7fdca341cdd33eaa6d5b453c79b202a239ceef8d792a7c82a51f302277a8b8ce"
	I1227 20:58:13.656508  511686 cri.go:96] found id: "0f6243dabab6f6acf95adfec716a9b5499ae2e36626a7d7c095f59e8c54137e7"
	I1227 20:58:13.656512  511686 cri.go:96] found id: "d216918be758a4a0fd5e7b8edf4f0ee84f75b9bdcd96a5fd40cbde5de645de91"
	I1227 20:58:13.656516  511686 cri.go:96] found id: ""
	I1227 20:58:13.656567  511686 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 20:58:13.710865  511686 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:58:13Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:58:13.710953  511686 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:58:13.724408  511686 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:58:13.724429  511686 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:58:13.724489  511686 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:58:13.732931  511686 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:58:13.733467  511686 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-549946" does not appear in /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:58:13.733706  511686 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-272475/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-549946" cluster setting kubeconfig missing "newest-cni-549946" context setting]
	I1227 20:58:13.734122  511686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:13.735809  511686 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:58:13.747032  511686 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1227 20:58:13.747066  511686 kubeadm.go:602] duration metric: took 22.631375ms to restartPrimaryControlPlane
	I1227 20:58:13.747076  511686 kubeadm.go:403] duration metric: took 234.218644ms to StartCluster
	I1227 20:58:13.747096  511686 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:13.747153  511686 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:58:13.747986  511686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:13.748195  511686 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:58:13.748574  511686 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:58:13.748653  511686 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-549946"
	I1227 20:58:13.748669  511686 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-549946"
	W1227 20:58:13.748679  511686 addons.go:248] addon storage-provisioner should already be in state true
	I1227 20:58:13.748702  511686 host.go:66] Checking if "newest-cni-549946" exists ...
	I1227 20:58:13.749246  511686 cli_runner.go:164] Run: docker container inspect newest-cni-549946 --format={{.State.Status}}
	I1227 20:58:13.749900  511686 addons.go:70] Setting dashboard=true in profile "newest-cni-549946"
	I1227 20:58:13.749916  511686 addons.go:239] Setting addon dashboard=true in "newest-cni-549946"
	W1227 20:58:13.749923  511686 addons.go:248] addon dashboard should already be in state true
	I1227 20:58:13.749948  511686 host.go:66] Checking if "newest-cni-549946" exists ...
	I1227 20:58:13.750354  511686 cli_runner.go:164] Run: docker container inspect newest-cni-549946 --format={{.State.Status}}
	I1227 20:58:13.750775  511686 config.go:182] Loaded profile config "newest-cni-549946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:58:13.750936  511686 addons.go:70] Setting default-storageclass=true in profile "newest-cni-549946"
	I1227 20:58:13.750958  511686 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-549946"
	I1227 20:58:13.756044  511686 cli_runner.go:164] Run: docker container inspect newest-cni-549946 --format={{.State.Status}}
	I1227 20:58:13.757119  511686 out.go:179] * Verifying Kubernetes components...
	I1227 20:58:13.762979  511686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:58:13.833534  511686 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:58:13.833652  511686 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 20:58:13.836427  511686 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 20:58:13.838829  511686 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:58:13.838855  511686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:58:13.838916  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:13.839309  511686 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 20:58:13.839340  511686 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 20:58:13.839389  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:13.851926  511686 addons.go:239] Setting addon default-storageclass=true in "newest-cni-549946"
	W1227 20:58:13.851944  511686 addons.go:248] addon default-storageclass should already be in state true
	I1227 20:58:13.851967  511686 host.go:66] Checking if "newest-cni-549946" exists ...
	I1227 20:58:13.852369  511686 cli_runner.go:164] Run: docker container inspect newest-cni-549946 --format={{.State.Status}}
	I1227 20:58:13.919929  511686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:58:13.922581  511686 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:58:13.922600  511686 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:58:13.922663  511686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-549946
	I1227 20:58:13.923030  511686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:58:13.954687  511686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/newest-cni-549946/id_rsa Username:docker}
	I1227 20:58:14.233378  511686 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:58:14.257438  511686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:58:14.276911  511686 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:58:14.276977  511686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:58:14.290997  511686 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 20:58:14.291021  511686 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 20:58:14.342415  511686 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 20:58:14.342438  511686 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 20:58:14.405072  511686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:58:14.465124  511686 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 20:58:14.465146  511686 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 20:58:14.593893  511686 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 20:58:14.593919  511686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 20:58:14.678960  511686 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 20:58:14.678988  511686 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 20:58:14.737815  511686 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 20:58:14.737844  511686 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 20:58:14.758044  511686 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 20:58:14.758068  511686 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 20:58:14.786324  511686 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 20:58:14.786348  511686 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 20:58:14.826950  511686 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:58:14.826976  511686 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 20:58:14.854654  511686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:58:19.219540  511686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.962003026s)
	I1227 20:58:19.219606  511686 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.942620638s)
	I1227 20:58:19.219621  511686 api_server.go:72] duration metric: took 5.471396629s to wait for apiserver process to appear ...
	I1227 20:58:19.219628  511686 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:58:19.219646  511686 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 20:58:19.219965  511686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.814870091s)
	I1227 20:58:19.220261  511686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.365578111s)
	I1227 20:58:19.223802  511686 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-549946 addons enable metrics-server
	
	I1227 20:58:19.236559  511686 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1227 20:58:19.238943  511686 api_server.go:141] control plane version: v1.35.0
	I1227 20:58:19.238967  511686 api_server.go:131] duration metric: took 19.333471ms to wait for apiserver health ...
	I1227 20:58:19.238977  511686 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:58:19.244985  511686 system_pods.go:59] 8 kube-system pods found
	I1227 20:58:19.245018  511686 system_pods.go:61] "coredns-7d764666f9-lwqng" [fad8ca65-36d9-4617-8bc9-d4c9def1d5b5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 20:58:19.245029  511686 system_pods.go:61] "etcd-newest-cni-549946" [a5d0bbff-5553-4cc5-ab87-057dbf70fa61] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:58:19.245037  511686 system_pods.go:61] "kindnet-x98wp" [344e609e-29a5-476e-9578-0ac5e389ff93] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 20:58:19.245049  511686 system_pods.go:61] "kube-apiserver-newest-cni-549946" [dd80588f-c85b-4a1b-a933-c2e2a987d7ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:58:19.245060  511686 system_pods.go:61] "kube-controller-manager-newest-cni-549946" [9cd183d8-c947-4b6e-a4cd-3603c51d4909] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:58:19.245071  511686 system_pods.go:61] "kube-proxy-j8h9m" [e72d123e-acc5-453f-b934-82214364e93d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 20:58:19.245085  511686 system_pods.go:61] "kube-scheduler-newest-cni-549946" [ac00b2bc-7be9-46a6-8025-45f27e7dfebc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:58:19.245091  511686 system_pods.go:61] "storage-provisioner" [7b4a0a3b-3bad-4818-8f46-2b25602b28c3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 20:58:19.245096  511686 system_pods.go:74] duration metric: took 6.114557ms to wait for pod list to return data ...
	I1227 20:58:19.245108  511686 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:58:19.246297  511686 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 20:58:19.249521  511686 addons.go:530] duration metric: took 5.500946101s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 20:58:19.250044  511686 default_sa.go:45] found service account: "default"
	I1227 20:58:19.250064  511686 default_sa.go:55] duration metric: took 4.950553ms for default service account to be created ...
	I1227 20:58:19.250075  511686 kubeadm.go:587] duration metric: took 5.501849552s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 20:58:19.250093  511686 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:58:19.253638  511686 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:58:19.253669  511686 node_conditions.go:123] node cpu capacity is 2
	I1227 20:58:19.253682  511686 node_conditions.go:105] duration metric: took 3.583279ms to run NodePressure ...
	I1227 20:58:19.253694  511686 start.go:242] waiting for startup goroutines ...
	I1227 20:58:19.253703  511686 start.go:247] waiting for cluster config update ...
	I1227 20:58:19.253717  511686 start.go:256] writing updated cluster config ...
	I1227 20:58:19.253975  511686 ssh_runner.go:195] Run: rm -f paused
	I1227 20:58:19.322382  511686 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 20:58:19.325398  511686 out.go:203] 
	W1227 20:58:19.328470  511686 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 20:58:19.331387  511686 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 20:58:19.334579  511686 out.go:179] * Done! kubectl is now configured to use "newest-cni-549946" cluster and "default" namespace by default
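The skew warning above compares the host kubectl (1.33.2) with the cluster version (1.35.0); kubectl's documented support policy is one minor version of skew, so a two-minor gap earns the warning. A hedged sketch of checking both sides, using the profile name from this run:

    kubectl version                                    # host client, 1.33.2 in this run
    minikube -p newest-cni-549946 kubectl -- version   # version-matched kubectl fetched by minikube
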
	
	
	==> CRI-O <==
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.874965219Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.881676987Z" level=info msg="Running pod sandbox: kube-system/kindnet-x98wp/POD" id=bedc72d2-597f-442f-86f1-230124cb8062 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.881739754Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.886375008Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=0a97bf1c-df0b-4bce-879e-7faecfe33111 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.88977174Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=bedc72d2-597f-442f-86f1-230124cb8062 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.896347463Z" level=info msg="Ran pod sandbox aa868978df6518694c31c527951773b0b93f2decfd4035b41fd27ba1c2f552f6 with infra container: kube-system/kube-proxy-j8h9m/POD" id=0a97bf1c-df0b-4bce-879e-7faecfe33111 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.89943071Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=44d25604-1a94-4630-969e-a0a00a4cebc5 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.900373724Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=4091dec6-1e96-4e83-a4e1-4a0a931ad597 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.903437707Z" level=info msg="Creating container: kube-system/kube-proxy-j8h9m/kube-proxy" id=ece6e4c0-35f3-4fd7-85c4-a7faeb96dee4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.903542976Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.903860005Z" level=info msg="Ran pod sandbox 2d2305d1aceeb3a4cd585b0ed380cd352521d84a9893d6790bdf2bb40d6d0030 with infra container: kube-system/kindnet-x98wp/POD" id=bedc72d2-597f-442f-86f1-230124cb8062 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.906572194Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=b2965885-2ff5-4788-a1c2-f9478969f677 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.9105147Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=55e8e085-3b29-4d74-8cf4-db2b74285844 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.91079461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.911567496Z" level=info msg="Creating container: kube-system/kindnet-x98wp/kindnet-cni" id=00d4f78b-327b-42e5-b24b-d728c87cdf96 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.911672797Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.912200701Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.938793363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.940343393Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.991438092Z" level=info msg="Created container 24c96d8311b2eede51392a6988d59a67f1e627441610efe3a19a1e6caed81c77: kube-system/kindnet-x98wp/kindnet-cni" id=00d4f78b-327b-42e5-b24b-d728c87cdf96 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.992191925Z" level=info msg="Starting container: 24c96d8311b2eede51392a6988d59a67f1e627441610efe3a19a1e6caed81c77" id=a14b9dc4-7a2c-4589-9cbd-d321dd8f398b name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:58:18 newest-cni-549946 crio[613]: time="2025-12-27T20:58:18.994187383Z" level=info msg="Started container" PID=1075 containerID=24c96d8311b2eede51392a6988d59a67f1e627441610efe3a19a1e6caed81c77 description=kube-system/kindnet-x98wp/kindnet-cni id=a14b9dc4-7a2c-4589-9cbd-d321dd8f398b name=/runtime.v1.RuntimeService/StartContainer sandboxID=2d2305d1aceeb3a4cd585b0ed380cd352521d84a9893d6790bdf2bb40d6d0030
	Dec 27 20:58:19 newest-cni-549946 crio[613]: time="2025-12-27T20:58:19.081903681Z" level=info msg="Created container 5355cd83edb132719f8b40ff6136f563ecb8fefdebeae2c0612a70e25533a2ff: kube-system/kube-proxy-j8h9m/kube-proxy" id=ece6e4c0-35f3-4fd7-85c4-a7faeb96dee4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:58:19 newest-cni-549946 crio[613]: time="2025-12-27T20:58:19.082718535Z" level=info msg="Starting container: 5355cd83edb132719f8b40ff6136f563ecb8fefdebeae2c0612a70e25533a2ff" id=503c7d6d-8725-425f-a4b0-4d85bc3b2644 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:58:19 newest-cni-549946 crio[613]: time="2025-12-27T20:58:19.087464638Z" level=info msg="Started container" PID=1074 containerID=5355cd83edb132719f8b40ff6136f563ecb8fefdebeae2c0612a70e25533a2ff description=kube-system/kube-proxy-j8h9m/kube-proxy id=503c7d6d-8725-425f-a4b0-4d85bc3b2644 name=/runtime.v1.RuntimeService/StartContainer sandboxID=aa868978df6518694c31c527951773b0b93f2decfd4035b41fd27ba1c2f552f6
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	24c96d8311b2e       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13   5 seconds ago       Running             kindnet-cni               1                   2d2305d1aceeb       kindnet-x98wp                               kube-system
	5355cd83edb13       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   5 seconds ago       Running             kube-proxy                1                   aa868978df651       kube-proxy-j8h9m                            kube-system
	c0bd9fdc2ab59       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   11 seconds ago      Running             etcd                      1                   915a72e82df3c       etcd-newest-cni-549946                      kube-system
	7fdca341cdd33       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   11 seconds ago      Running             kube-controller-manager   1                   326de8bbc03ae       kube-controller-manager-newest-cni-549946   kube-system
	0f6243dabab6f       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   11 seconds ago      Running             kube-apiserver            1                   9544ff076daaf       kube-apiserver-newest-cni-549946            kube-system
	d216918be758a       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   11 seconds ago      Running             kube-scheduler            1                   7998abb75e427       kube-scheduler-newest-cni-549946            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-549946
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-549946
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=newest-cni-549946
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_57_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:57:51 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-549946
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:58:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:58:18 +0000   Sat, 27 Dec 2025 20:57:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:58:18 +0000   Sat, 27 Dec 2025 20:57:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:58:18 +0000   Sat, 27 Dec 2025 20:57:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 27 Dec 2025 20:58:18 +0000   Sat, 27 Dec 2025 20:57:48 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-549946
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                6b7382ae-8399-40ff-bb99-a6dfaed9059c
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-549946                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         31s
	  kube-system                 kindnet-x98wp                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-newest-cni-549946             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-newest-cni-549946    200m (10%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-j8h9m                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-newest-cni-549946             100m (5%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node newest-cni-549946 event: Registered Node newest-cni-549946 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-549946 event: Registered Node newest-cni-549946 in Controller
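The Ready=False condition above is the usual transient state right after a restart: the kubelet reports NetworkPluginNotReady until the CNI plugin (kindnet here, whose container is shown starting in the CRI-O section) writes its configuration into /etc/cni/net.d. A hedged pair of checks for that state on the node:

    sudo ls /etc/cni/net.d/    # empty until kindnet drops its config
    sudo crictl logs "$(sudo crictl ps --name kindnet-cni -q | head -n1)"
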
	
	
	==> dmesg <==
	[ +35.447549] overlayfs: idmapped layers are currently not supported
	[Dec27 20:26] overlayfs: idmapped layers are currently not supported
	[Dec27 20:27] overlayfs: idmapped layers are currently not supported
	[  +6.770645] overlayfs: idmapped layers are currently not supported
	[Dec27 20:28] overlayfs: idmapped layers are currently not supported
	[ +25.872751] overlayfs: idmapped layers are currently not supported
	[Dec27 20:29] overlayfs: idmapped layers are currently not supported
	[ +32.997137] overlayfs: idmapped layers are currently not supported
	[Dec27 20:31] overlayfs: idmapped layers are currently not supported
	[Dec27 20:33] overlayfs: idmapped layers are currently not supported
	[ +33.772475] overlayfs: idmapped layers are currently not supported
	[Dec27 20:39] overlayfs: idmapped layers are currently not supported
	[Dec27 20:40] overlayfs: idmapped layers are currently not supported
	[Dec27 20:44] overlayfs: idmapped layers are currently not supported
	[Dec27 20:45] overlayfs: idmapped layers are currently not supported
	[Dec27 20:49] overlayfs: idmapped layers are currently not supported
	[Dec27 20:50] overlayfs: idmapped layers are currently not supported
	[Dec27 20:51] overlayfs: idmapped layers are currently not supported
	[Dec27 20:52] overlayfs: idmapped layers are currently not supported
	[Dec27 20:53] overlayfs: idmapped layers are currently not supported
	[Dec27 20:55] overlayfs: idmapped layers are currently not supported
	[ +57.272039] overlayfs: idmapped layers are currently not supported
	[Dec27 20:57] overlayfs: idmapped layers are currently not supported
	[ +34.093681] overlayfs: idmapped layers are currently not supported
	[Dec27 20:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c0bd9fdc2ab5940221455fa985fb0081e3b368f0836df24c03263c5d6dcce82f] <==
	{"level":"info","ts":"2025-12-27T20:58:13.522567Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T20:58:13.522641Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T20:58:13.523109Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-27T20:58:13.523768Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T20:58:13.523794Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T20:58:13.523829Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-27T20:58:13.523836Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-27T20:58:13.593224Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T20:58:13.593286Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:58:13.593365Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T20:58:13.593379Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:58:13.593394Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T20:58:13.597639Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T20:58:13.597686Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:58:13.597708Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-27T20:58:13.597718Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T20:58:13.599588Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:newest-cni-549946 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:58:13.599713Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:58:13.599783Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:58:13.599946Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:58:13.600026Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:58:13.612216Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:58:13.631477Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T20:58:13.629570Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:58:13.671857Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 20:58:24 up  2:40,  0 user,  load average: 3.84, 2.19, 1.92
	Linux newest-cni-549946 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [24c96d8311b2eede51392a6988d59a67f1e627441610efe3a19a1e6caed81c77] <==
	I1227 20:58:19.155115       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:58:19.155307       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1227 20:58:19.155411       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:58:19.155423       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:58:19.155432       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:58:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:58:19.333873       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:58:19.333943       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:58:19.333981       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:58:19.334339       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [0f6243dabab6f6acf95adfec716a9b5499ae2e36626a7d7c095f59e8c54137e7] <==
	I1227 20:58:17.938507       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:17.939579       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 20:58:17.942358       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 20:58:17.943069       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 20:58:17.943191       1 aggregator.go:187] initial CRD sync complete...
	I1227 20:58:17.943206       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 20:58:17.943212       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:58:17.943217       1 cache.go:39] Caches are synced for autoregister controller
	E1227 20:58:17.945120       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 20:58:17.961530       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:17.961570       1 policy_source.go:248] refreshing policies
	I1227 20:58:17.971836       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:58:18.650818       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:58:18.723304       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 20:58:18.747044       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:58:18.789533       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:58:18.837072       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:58:18.852842       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:58:18.999284       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.181.211"}
	I1227 20:58:19.021651       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.86.198"}
	I1227 20:58:21.523075       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:58:21.570517       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:58:21.570518       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 20:58:21.676325       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:58:21.782741       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [7fdca341cdd33eaa6d5b453c79b202a239ceef8d792a7c82a51f302277a8b8ce] <==
	I1227 20:58:21.132780       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.132808       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.132870       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 20:58:21.132935       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-549946"
	I1227 20:58:21.132941       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.132974       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.132980       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1227 20:58:21.132997       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.133000       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.133126       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.133151       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.133297       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.133531       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.133623       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.137640       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.137715       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.137757       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.137783       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.140321       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.148994       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.149081       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:58:21.149120       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 20:58:21.149016       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.156219       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:21.191672       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [5355cd83edb132719f8b40ff6136f563ecb8fefdebeae2c0612a70e25533a2ff] <==
	I1227 20:58:19.271324       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:58:19.348653       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:58:19.449852       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:19.449886       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1227 20:58:19.449972       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:58:19.499772       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:58:19.499847       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:58:19.504524       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:58:19.504974       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:58:19.504994       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:58:19.509254       1 config.go:200] "Starting service config controller"
	I1227 20:58:19.509280       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:58:19.509298       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:58:19.509303       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:58:19.509319       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:58:19.509323       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:58:19.509992       1 config.go:309] "Starting node config controller"
	I1227 20:58:19.510011       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:58:19.510018       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:58:19.609978       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 20:58:19.610016       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:58:19.610051       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d216918be758a4a0fd5e7b8edf4f0ee84f75b9bdcd96a5fd40cbde5de645de91] <==
	I1227 20:58:16.013165       1 serving.go:386] Generated self-signed cert in-memory
	I1227 20:58:17.930107       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 20:58:17.930137       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:58:17.955765       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 20:58:17.955882       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1227 20:58:17.955894       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:58:17.955925       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 20:58:17.963301       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 20:58:17.963326       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:58:17.963349       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1227 20:58:17.963355       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:58:18.066992       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:18.067032       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:18.165511       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: I1227 20:58:18.076993     733 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: I1227 20:58:18.078478     733 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: E1227 20:58:18.083481     733 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-549946\" already exists" pod="kube-system/kube-apiserver-newest-cni-549946"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: I1227 20:58:18.083644     733 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-549946"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: E1227 20:58:18.180413     733 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-549946\" already exists" pod="kube-system/kube-controller-manager-newest-cni-549946"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: I1227 20:58:18.180457     733 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-549946"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: E1227 20:58:18.239305     733 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-549946\" already exists" pod="kube-system/kube-scheduler-newest-cni-549946"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: I1227 20:58:18.564095     733 apiserver.go:52] "Watching apiserver"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: E1227 20:58:18.570774     733 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-549946" containerName="kube-controller-manager"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: E1227 20:58:18.574179     733 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-549946" containerName="etcd"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: E1227 20:58:18.574372     733 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-549946" containerName="kube-scheduler"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: E1227 20:58:18.574653     733 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-549946" containerName="kube-apiserver"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: I1227 20:58:18.579952     733 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: I1227 20:58:18.643022     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/344e609e-29a5-476e-9578-0ac5e389ff93-cni-cfg\") pod \"kindnet-x98wp\" (UID: \"344e609e-29a5-476e-9578-0ac5e389ff93\") " pod="kube-system/kindnet-x98wp"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: I1227 20:58:18.643778     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/344e609e-29a5-476e-9578-0ac5e389ff93-lib-modules\") pod \"kindnet-x98wp\" (UID: \"344e609e-29a5-476e-9578-0ac5e389ff93\") " pod="kube-system/kindnet-x98wp"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: I1227 20:58:18.643941     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e72d123e-acc5-453f-b934-82214364e93d-xtables-lock\") pod \"kube-proxy-j8h9m\" (UID: \"e72d123e-acc5-453f-b934-82214364e93d\") " pod="kube-system/kube-proxy-j8h9m"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: I1227 20:58:18.644061     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e72d123e-acc5-453f-b934-82214364e93d-lib-modules\") pod \"kube-proxy-j8h9m\" (UID: \"e72d123e-acc5-453f-b934-82214364e93d\") " pod="kube-system/kube-proxy-j8h9m"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: I1227 20:58:18.644164     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/344e609e-29a5-476e-9578-0ac5e389ff93-xtables-lock\") pod \"kindnet-x98wp\" (UID: \"344e609e-29a5-476e-9578-0ac5e389ff93\") " pod="kube-system/kindnet-x98wp"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: I1227 20:58:18.673104     733 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: E1227 20:58:18.717281     733 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-549946" containerName="kube-controller-manager"
	Dec 27 20:58:18 newest-cni-549946 kubelet[733]: W1227 20:58:18.897126     733 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/33026e33441a3f96ec992d0bc78455daa35943f22f16bf93834cd28639575522/crio-2d2305d1aceeb3a4cd585b0ed380cd352521d84a9893d6790bdf2bb40d6d0030 WatchSource:0}: Error finding container 2d2305d1aceeb3a4cd585b0ed380cd352521d84a9893d6790bdf2bb40d6d0030: Status 404 returned error can't find the container with id 2d2305d1aceeb3a4cd585b0ed380cd352521d84a9893d6790bdf2bb40d6d0030
	Dec 27 20:58:19 newest-cni-549946 kubelet[733]: E1227 20:58:19.025297     733 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-549946" containerName="kube-scheduler"
	Dec 27 20:58:20 newest-cni-549946 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 20:58:20 newest-cni-549946 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 20:58:20 newest-cni-549946 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
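The last kubelet lines above show systemd stopping the kubelet, consistent with the pause attempt's first step of disabling it (visible in the no-preload stderr further down), while the static control-plane containers keep running under crio; that is why the APIServer check below still reports Running. A small sketch for confirming that split state by hand, assuming the profile name from the log and that the checks are run over `minikube ssh` rather than the harness's internal ssh runner:

    # Kubelet should be inactive after the pause attempt disabled it...
    minikube -p newest-cni-549946 ssh -- sudo systemctl is-active kubelet
    # ...while the kube-apiserver container is still up under crio.
    minikube -p newest-cni-549946 ssh -- sudo crictl ps --name kube-apiserver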
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-549946 -n newest-cni-549946
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-549946 -n newest-cni-549946: exit status 2 (343.749144ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-549946 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-lwqng storage-provisioner dashboard-metrics-scraper-867fb5f87b-bgfj9 kubernetes-dashboard-b84665fb8-xzsn4
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-549946 describe pod coredns-7d764666f9-lwqng storage-provisioner dashboard-metrics-scraper-867fb5f87b-bgfj9 kubernetes-dashboard-b84665fb8-xzsn4
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-549946 describe pod coredns-7d764666f9-lwqng storage-provisioner dashboard-metrics-scraper-867fb5f87b-bgfj9 kubernetes-dashboard-b84665fb8-xzsn4: exit status 1 (83.378359ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-lwqng" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-bgfj9" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-xzsn4" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-549946 describe pod coredns-7d764666f9-lwqng storage-provisioner dashboard-metrics-scraper-867fb5f87b-bgfj9 kubernetes-dashboard-b84665fb8-xzsn4: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.74s)
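A reading note on the NotFound errors above: the harness lists non-running pods across all namespaces, but the follow-up kubectl describe is issued without a namespace, so it only searches default and reports every pod as not found. A minimal sketch that carries the namespace along (context and field selector copied from the log; the jsonpath expression and the loop are illustrative assumptions, not part of the test harness):

    # List non-running pods with their namespaces, then describe each one where it lives.
    kubectl --context newest-cni-549946 get po -A --field-selector=status.phase!=Running \
      -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
    while read -r ns pod; do
      kubectl --context newest-cni-549946 -n "$ns" describe pod "$pod"
    done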

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.57s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-542467 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-542467 --alsologtostderr -v=1: exit status 80 (1.949387642s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-542467 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:59:33.053567  520951 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:59:33.054860  520951 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:59:33.056178  520951 out.go:374] Setting ErrFile to fd 2...
	I1227 20:59:33.056190  520951 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:59:33.056500  520951 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:59:33.056774  520951 out.go:368] Setting JSON to false
	I1227 20:59:33.056803  520951 mustload.go:66] Loading cluster: no-preload-542467
	I1227 20:59:33.057188  520951 config.go:182] Loaded profile config "no-preload-542467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:59:33.057674  520951 cli_runner.go:164] Run: docker container inspect no-preload-542467 --format={{.State.Status}}
	I1227 20:59:33.077414  520951 host.go:66] Checking if "no-preload-542467" exists ...
	I1227 20:59:33.077787  520951 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:59:33.142538  520951 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-27 20:59:33.132776905 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:59:33.143243  520951 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22332/minikube-v1.37.0-1766811082-22332-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766811082-22332/minikube-v1.37.0-1766811082-22332-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766811082-22332-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:no-preload-542467 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool
=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 20:59:33.146627  520951 out.go:179] * Pausing node no-preload-542467 ... 
	I1227 20:59:33.150443  520951 host.go:66] Checking if "no-preload-542467" exists ...
	I1227 20:59:33.150780  520951 ssh_runner.go:195] Run: systemctl --version
	I1227 20:59:33.150828  520951 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-542467
	I1227 20:59:33.173655  520951 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/no-preload-542467/id_rsa Username:docker}
	I1227 20:59:33.273107  520951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:59:33.288566  520951 pause.go:52] kubelet running: true
	I1227 20:59:33.288654  520951 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:59:33.544729  520951 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:59:33.544814  520951 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:59:33.612187  520951 cri.go:96] found id: "e00bd10efcab4d9ccd6d7493ae80baa4f6b32616652432abfbc287b063e25f59"
	I1227 20:59:33.612258  520951 cri.go:96] found id: "496bbc1fc440e887ec74fe08c1f48d36953507a7dc1d003f4353ff7944432c2d"
	I1227 20:59:33.612291  520951 cri.go:96] found id: "484a2b95e52a778cc7edd0ec75e04dd23bf1af7cd989cadb7f6465f524fdddc5"
	I1227 20:59:33.612315  520951 cri.go:96] found id: "be5c6226a604721581fbe9641759c616cf40b5597d34625ad5321668ad3f5a6f"
	I1227 20:59:33.612333  520951 cri.go:96] found id: "bc0280cd97160303d86d9f47745f988149d592e81902cb53c092add4f5fb263b"
	I1227 20:59:33.612369  520951 cri.go:96] found id: "66e8f829d9c3d364d238135636c405f3c6255104b333cb219ec600934ec6abd0"
	I1227 20:59:33.612390  520951 cri.go:96] found id: "c19a656202deee3c031169a10d16ce0309d87ad8c5c40f4fe78c299c16484dfb"
	I1227 20:59:33.612408  520951 cri.go:96] found id: "161b43e94648c1d5a060e54751a6efa923997153605bc1c7b6e51c556ac8e5bf"
	I1227 20:59:33.612424  520951 cri.go:96] found id: "87c12835ffb381b6fd21e4708c09054907f3433d8bb1508c5f509e1d6dfef79b"
	I1227 20:59:33.612464  520951 cri.go:96] found id: "8042f28b87a726dade2ba4ff6db74c840e2d9ebdd3cd33f55037fcc0e835344e"
	I1227 20:59:33.612486  520951 cri.go:96] found id: "0b5f1122d2bae474f14bb22c11d66a7c0b17063ca9cd0fabf5651aa48608c872"
	I1227 20:59:33.612506  520951 cri.go:96] found id: ""
	I1227 20:59:33.612593  520951 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:59:33.633520  520951 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:59:33Z" level=error msg="open /run/runc: no such file or directory"
	I1227 20:59:33.901024  520951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:59:33.914009  520951 pause.go:52] kubelet running: false
	I1227 20:59:33.914141  520951 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:59:34.080985  520951 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:59:34.081070  520951 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:59:34.149525  520951 cri.go:96] found id: "e00bd10efcab4d9ccd6d7493ae80baa4f6b32616652432abfbc287b063e25f59"
	I1227 20:59:34.149549  520951 cri.go:96] found id: "496bbc1fc440e887ec74fe08c1f48d36953507a7dc1d003f4353ff7944432c2d"
	I1227 20:59:34.149554  520951 cri.go:96] found id: "484a2b95e52a778cc7edd0ec75e04dd23bf1af7cd989cadb7f6465f524fdddc5"
	I1227 20:59:34.149558  520951 cri.go:96] found id: "be5c6226a604721581fbe9641759c616cf40b5597d34625ad5321668ad3f5a6f"
	I1227 20:59:34.149561  520951 cri.go:96] found id: "bc0280cd97160303d86d9f47745f988149d592e81902cb53c092add4f5fb263b"
	I1227 20:59:34.149565  520951 cri.go:96] found id: "66e8f829d9c3d364d238135636c405f3c6255104b333cb219ec600934ec6abd0"
	I1227 20:59:34.149568  520951 cri.go:96] found id: "c19a656202deee3c031169a10d16ce0309d87ad8c5c40f4fe78c299c16484dfb"
	I1227 20:59:34.149571  520951 cri.go:96] found id: "161b43e94648c1d5a060e54751a6efa923997153605bc1c7b6e51c556ac8e5bf"
	I1227 20:59:34.149573  520951 cri.go:96] found id: "87c12835ffb381b6fd21e4708c09054907f3433d8bb1508c5f509e1d6dfef79b"
	I1227 20:59:34.149580  520951 cri.go:96] found id: "8042f28b87a726dade2ba4ff6db74c840e2d9ebdd3cd33f55037fcc0e835344e"
	I1227 20:59:34.149583  520951 cri.go:96] found id: "0b5f1122d2bae474f14bb22c11d66a7c0b17063ca9cd0fabf5651aa48608c872"
	I1227 20:59:34.149586  520951 cri.go:96] found id: ""
	I1227 20:59:34.149633  520951 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:59:34.655820  520951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:59:34.668392  520951 pause.go:52] kubelet running: false
	I1227 20:59:34.668472  520951 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 20:59:34.836284  520951 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 20:59:34.836370  520951 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 20:59:34.901328  520951 cri.go:96] found id: "e00bd10efcab4d9ccd6d7493ae80baa4f6b32616652432abfbc287b063e25f59"
	I1227 20:59:34.901393  520951 cri.go:96] found id: "496bbc1fc440e887ec74fe08c1f48d36953507a7dc1d003f4353ff7944432c2d"
	I1227 20:59:34.901413  520951 cri.go:96] found id: "484a2b95e52a778cc7edd0ec75e04dd23bf1af7cd989cadb7f6465f524fdddc5"
	I1227 20:59:34.901433  520951 cri.go:96] found id: "be5c6226a604721581fbe9641759c616cf40b5597d34625ad5321668ad3f5a6f"
	I1227 20:59:34.901478  520951 cri.go:96] found id: "bc0280cd97160303d86d9f47745f988149d592e81902cb53c092add4f5fb263b"
	I1227 20:59:34.901503  520951 cri.go:96] found id: "66e8f829d9c3d364d238135636c405f3c6255104b333cb219ec600934ec6abd0"
	I1227 20:59:34.901514  520951 cri.go:96] found id: "c19a656202deee3c031169a10d16ce0309d87ad8c5c40f4fe78c299c16484dfb"
	I1227 20:59:34.901517  520951 cri.go:96] found id: "161b43e94648c1d5a060e54751a6efa923997153605bc1c7b6e51c556ac8e5bf"
	I1227 20:59:34.901521  520951 cri.go:96] found id: "87c12835ffb381b6fd21e4708c09054907f3433d8bb1508c5f509e1d6dfef79b"
	I1227 20:59:34.901527  520951 cri.go:96] found id: "8042f28b87a726dade2ba4ff6db74c840e2d9ebdd3cd33f55037fcc0e835344e"
	I1227 20:59:34.901530  520951 cri.go:96] found id: "0b5f1122d2bae474f14bb22c11d66a7c0b17063ca9cd0fabf5651aa48608c872"
	I1227 20:59:34.901533  520951 cri.go:96] found id: ""
	I1227 20:59:34.901581  520951 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 20:59:34.918988  520951 out.go:203] 
	W1227 20:59:34.921765  520951 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:59:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T20:59:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 20:59:34.921792  520951 out.go:285] * 
	* 
	W1227 20:59:34.925566  520951 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 20:59:34.929416  520951 out.go:203] 

                                                
                                                
** /stderr **
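The GUEST_PAUSE error above comes from the last pre-check in the pause path: after disabling the kubelet, minikube lists CRI containers in the kube-system, kubernetes-dashboard and istio-operator namespaces and then asks runc for its own view, and `sudo runc list -f json` fails because /run/runc does not exist on this crio node. A rough sketch for replaying the same three checks by hand, assuming the profile name and commands from the log and using `minikube ssh` in place of the internal ssh runner:

    PROFILE=no-preload-542467
    # 1) Is the kubelet still active? (the pause path disables it first)
    minikube -p "$PROFILE" ssh -- sudo systemctl is-active kubelet
    # 2) Which CRI containers exist in the namespaces the pause path scans?
    minikube -p "$PROFILE" ssh -- "sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
    # 3) The step that fails on this node: runc has no /run/runc state directory under crio.
    minikube -p "$PROFILE" ssh -- sudo runc list -f json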
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-542467 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-542467
helpers_test.go:244: (dbg) docker inspect no-preload-542467:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dd7872488d6d42d5f37285938726aa6ef58b390c3cf12a82967c0d0945a69379",
	        "Created": "2025-12-27T20:57:05.049440772Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 515777,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:58:29.721544865Z",
	            "FinishedAt": "2025-12-27T20:58:28.642739322Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/dd7872488d6d42d5f37285938726aa6ef58b390c3cf12a82967c0d0945a69379/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dd7872488d6d42d5f37285938726aa6ef58b390c3cf12a82967c0d0945a69379/hostname",
	        "HostsPath": "/var/lib/docker/containers/dd7872488d6d42d5f37285938726aa6ef58b390c3cf12a82967c0d0945a69379/hosts",
	        "LogPath": "/var/lib/docker/containers/dd7872488d6d42d5f37285938726aa6ef58b390c3cf12a82967c0d0945a69379/dd7872488d6d42d5f37285938726aa6ef58b390c3cf12a82967c0d0945a69379-json.log",
	        "Name": "/no-preload-542467",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-542467:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-542467",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dd7872488d6d42d5f37285938726aa6ef58b390c3cf12a82967c0d0945a69379",
	                "LowerDir": "/var/lib/docker/overlay2/d95d5d7678527e5e563d3be9d484c3b526882d1f0fcfc24ff6bfe885fd5f4f8f-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d95d5d7678527e5e563d3be9d484c3b526882d1f0fcfc24ff6bfe885fd5f4f8f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d95d5d7678527e5e563d3be9d484c3b526882d1f0fcfc24ff6bfe885fd5f4f8f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d95d5d7678527e5e563d3be9d484c3b526882d1f0fcfc24ff6bfe885fd5f4f8f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-542467",
	                "Source": "/var/lib/docker/volumes/no-preload-542467/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-542467",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-542467",
	                "name.minikube.sigs.k8s.io": "no-preload-542467",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d51b7576ee3e2c454a39a423ea03d9b8e54acef384828538ad39e69d035b99bc",
	            "SandboxKey": "/var/run/docker/netns/d51b7576ee3e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-542467": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:8e:51:df:01:51",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c1ebbbafc12790a4f974a3988a1224bfe471b8982037ddfef20526083d80bfe8",
	                    "EndpointID": "1be02c98d1d0727f438100e0f6de5dee51ba3c4fa16e5d7433e75d9addcef82a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-542467",
	                        "dd7872488d6d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
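For reference, the 127.0.0.1:33453 SSH endpoint the pause command dialled is just the host port Docker bound to the node container's 22/tcp, as recorded in the Ports section above. The same lookup with the Go template that appears in the pause log, run by hand (profile name copied from the log):

    # Resolve the host port mapped to the node container's SSH port (22/tcp).
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      no-preload-542467
    # Should print 33453 for the container state captured above.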
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-542467 -n no-preload-542467
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-542467 -n no-preload-542467: exit status 2 (350.942544ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-542467 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-542467 logs -n 25: (1.332787702s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ ssh     │ force-systemd-flag-604544 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-604544         │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ delete  │ -p force-systemd-flag-604544                                                                                                                                                                                                                  │ force-systemd-flag-604544         │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ start   │ -p newest-cni-549946 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-549946                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:58 UTC │
	│ addons  │ enable metrics-server -p newest-cni-549946 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-549946                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │                     │
	│ stop    │ -p newest-cni-549946 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-549946                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ addons  │ enable dashboard -p newest-cni-549946 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-549946                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ start   │ -p newest-cni-549946 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-549946                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ addons  │ enable metrics-server -p no-preload-542467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-542467                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │                     │
	│ stop    │ -p no-preload-542467 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-542467                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ image   │ newest-cni-549946 image list --format=json                                                                                                                                                                                                    │ newest-cni-549946                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ pause   │ -p newest-cni-549946 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-549946                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │                     │
	│ delete  │ -p newest-cni-549946                                                                                                                                                                                                                          │ newest-cni-549946                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ delete  │ -p newest-cni-549946                                                                                                                                                                                                                          │ newest-cni-549946                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ start   │ -p test-preload-dl-gcs-038558 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-038558        │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-542467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-542467                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ start   │ -p no-preload-542467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-542467                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:59 UTC │
	│ delete  │ -p test-preload-dl-gcs-038558                                                                                                                                                                                                                 │ test-preload-dl-gcs-038558        │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ start   │ -p test-preload-dl-github-371459 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-371459     │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │                     │
	│ delete  │ -p test-preload-dl-github-371459                                                                                                                                                                                                              │ test-preload-dl-github-371459     │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-876907 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-876907 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-876907                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-876907 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ start   │ -p auto-037975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-037975                       │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:59 UTC │
	│ ssh     │ -p auto-037975 pgrep -a kubelet                                                                                                                                                                                                               │ auto-037975                       │ jenkins │ v1.37.0 │ 27 Dec 25 20:59 UTC │ 27 Dec 25 20:59 UTC │
	│ image   │ no-preload-542467 image list --format=json                                                                                                                                                                                                    │ no-preload-542467                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:59 UTC │ 27 Dec 25 20:59 UTC │
	│ pause   │ -p no-preload-542467 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-542467                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
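Note that the pause entries for newest-cni-549946 and no-preload-542467 in the table above record a START TIME but no END TIME. A minimal sketch, not taken from the captured log, of how one might rerun that step by hand and inspect the result; the profile name and flags are copied from the table, while the follow-up status call is an assumption:

	# rerun the pause step exactly as the test invoked it (flags from the table above)
	out/minikube-linux-arm64 pause -p no-preload-542467 --alsologtostderr -v=1
	# then check the reported state of the profile (assumed follow-up, JSON output)
	out/minikube-linux-arm64 status -p no-preload-542467 --output=json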
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:58:41
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:58:41.138140  517451 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:58:41.138248  517451 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:58:41.138286  517451 out.go:374] Setting ErrFile to fd 2...
	I1227 20:58:41.138293  517451 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:58:41.138542  517451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:58:41.138959  517451 out.go:368] Setting JSON to false
	I1227 20:58:41.139799  517451 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9674,"bootTime":1766859448,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:58:41.139860  517451 start.go:143] virtualization:  
	I1227 20:58:41.144577  517451 out.go:179] * [auto-037975] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:58:41.147604  517451 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:58:41.147675  517451 notify.go:221] Checking for updates...
	I1227 20:58:41.156376  517451 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:58:41.159362  517451 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:58:41.162292  517451 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:58:41.165061  517451 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:58:41.167938  517451 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:58:41.171371  517451 config.go:182] Loaded profile config "no-preload-542467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:58:41.171502  517451 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:58:41.207457  517451 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:58:41.207563  517451 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:58:41.299035  517451 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 20:58:41.287721234 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:58:41.299141  517451 docker.go:319] overlay module found
	I1227 20:58:41.305529  517451 out.go:179] * Using the docker driver based on user configuration
	I1227 20:58:41.308396  517451 start.go:309] selected driver: docker
	I1227 20:58:41.308416  517451 start.go:928] validating driver "docker" against <nil>
	I1227 20:58:41.308429  517451 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:58:41.309126  517451 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:58:41.379427  517451 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 20:58:41.370451524 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:58:41.379564  517451 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 20:58:41.379780  517451 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:58:41.382745  517451 out.go:179] * Using Docker driver with root privileges
	I1227 20:58:41.385673  517451 cni.go:84] Creating CNI manager for ""
	I1227 20:58:41.385739  517451 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:58:41.385753  517451 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 20:58:41.385835  517451 start.go:353] cluster config:
	{Name:auto-037975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-037975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s Rosetta:false}
	I1227 20:58:41.389027  517451 out.go:179] * Starting "auto-037975" primary control-plane node in "auto-037975" cluster
	I1227 20:58:41.391898  517451 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:58:41.394782  517451 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:58:41.397614  517451 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:58:41.397657  517451 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:58:41.397667  517451 cache.go:65] Caching tarball of preloaded images
	I1227 20:58:41.397683  517451 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:58:41.397747  517451 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:58:41.397756  517451 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
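The preload check above resolves to a tarball already present in the local cache, so no download happens. A minimal sketch, assuming the jenkins workspace layout shown in the log, of confirming that cached artifact by hand:

	ls -lh /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4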
	I1227 20:58:41.397874  517451 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/config.json ...
	I1227 20:58:41.397891  517451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/config.json: {Name:mk80aae67ee487fd1e849ea2310bba72ea3a5bd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:41.418370  517451 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:58:41.418450  517451 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:58:41.418504  517451 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:58:41.418537  517451 start.go:360] acquireMachinesLock for auto-037975: {Name:mkbb5944f1db4111ae7674aa61f644093ca0cc2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:58:41.418756  517451 start.go:364] duration metric: took 139.205µs to acquireMachinesLock for "auto-037975"
	I1227 20:58:41.418791  517451 start.go:93] Provisioning new machine with config: &{Name:auto-037975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-037975 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:58:41.418855  517451 start.go:125] createHost starting for "" (driver="docker")
	I1227 20:58:39.387256  515650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:58:39.411081  515650 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 20:58:39.411154  515650 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 20:58:39.461458  515650 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 20:58:39.461479  515650 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 20:58:39.498178  515650 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 20:58:39.498197  515650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 20:58:39.536864  515650 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 20:58:39.536884  515650 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 20:58:39.587226  515650 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 20:58:39.587304  515650 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 20:58:39.604959  515650 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 20:58:39.605040  515650 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 20:58:39.622977  515650 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 20:58:39.623053  515650 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 20:58:39.643283  515650 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:58:39.643354  515650 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 20:58:39.668538  515650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:58:43.252867  515650 node_ready.go:49] node "no-preload-542467" is "Ready"
	I1227 20:58:43.252898  515650 node_ready.go:38] duration metric: took 3.91011649s for node "no-preload-542467" to be "Ready" ...
	I1227 20:58:43.252912  515650 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:58:43.252970  515650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:58:46.045556  515650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.7997778s)
	I1227 20:58:46.045621  515650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.658297836s)
	I1227 20:58:46.098294  515650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.429663254s)
	I1227 20:58:46.098452  515650 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.845464571s)
	I1227 20:58:46.098478  515650 api_server.go:72] duration metric: took 7.26882894s to wait for apiserver process to appear ...
	I1227 20:58:46.098505  515650 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:58:46.098535  515650 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:58:46.108534  515650 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 20:58:46.110285  515650 api_server.go:141] control plane version: v1.35.0
	I1227 20:58:46.110316  515650 api_server.go:131] duration metric: took 11.803151ms to wait for apiserver health ...
	I1227 20:58:46.110325  515650 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:58:46.115185  515650 system_pods.go:59] 8 kube-system pods found
	I1227 20:58:46.115257  515650 system_pods.go:61] "coredns-7d764666f9-p7xs9" [b5728b9d-d5dd-4946-971a-543ccae4bbb5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:58:46.115277  515650 system_pods.go:61] "etcd-no-preload-542467" [b2fc9fb5-3a79-4162-afa1-f4132b555027] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:58:46.115289  515650 system_pods.go:61] "kindnet-2v4p8" [9c2c77c3-7d5e-45f4-8eea-f6928cf134f5] Running
	I1227 20:58:46.115300  515650 system_pods.go:61] "kube-apiserver-no-preload-542467" [fb1f5660-5f3a-42ef-b2e5-4ba7758fcf27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:58:46.115319  515650 system_pods.go:61] "kube-controller-manager-no-preload-542467" [0e3c13f8-db03-45b4-a52e-b77e3414cdbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:58:46.115326  515650 system_pods.go:61] "kube-proxy-7mx96" [8d494c52-2b6e-431e-a66c-7f1e3f28a070] Running
	I1227 20:58:46.115338  515650 system_pods.go:61] "kube-scheduler-no-preload-542467" [576aa9a8-b4fa-4e56-a9a5-b438a60e0e18] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:58:46.115342  515650 system_pods.go:61] "storage-provisioner" [20b095bb-fb60-4860-ae08-c05d950bd9ea] Running
	I1227 20:58:46.115350  515650 system_pods.go:74] duration metric: took 5.018383ms to wait for pod list to return data ...
	I1227 20:58:46.115362  515650 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:58:46.122654  515650 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-542467 addons enable metrics-server
	
	I1227 20:58:41.422243  517451 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 20:58:41.422464  517451 start.go:159] libmachine.API.Create for "auto-037975" (driver="docker")
	I1227 20:58:41.422492  517451 client.go:173] LocalClient.Create starting
	I1227 20:58:41.422549  517451 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem
	I1227 20:58:41.422592  517451 main.go:144] libmachine: Decoding PEM data...
	I1227 20:58:41.422608  517451 main.go:144] libmachine: Parsing certificate...
	I1227 20:58:41.422664  517451 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem
	I1227 20:58:41.422682  517451 main.go:144] libmachine: Decoding PEM data...
	I1227 20:58:41.422693  517451 main.go:144] libmachine: Parsing certificate...
	I1227 20:58:41.423329  517451 cli_runner.go:164] Run: docker network inspect auto-037975 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 20:58:41.451891  517451 cli_runner.go:211] docker network inspect auto-037975 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 20:58:41.451969  517451 network_create.go:284] running [docker network inspect auto-037975] to gather additional debugging logs...
	I1227 20:58:41.451985  517451 cli_runner.go:164] Run: docker network inspect auto-037975
	W1227 20:58:41.471282  517451 cli_runner.go:211] docker network inspect auto-037975 returned with exit code 1
	I1227 20:58:41.471318  517451 network_create.go:287] error running [docker network inspect auto-037975]: docker network inspect auto-037975: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-037975 not found
	I1227 20:58:41.471331  517451 network_create.go:289] output of [docker network inspect auto-037975]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-037975 not found
	
	** /stderr **
	I1227 20:58:41.471435  517451 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:58:41.489211  517451 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9521cb9225c5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:1d:ef:38:b7:a6} reservation:<nil>}
	I1227 20:58:41.489627  517451 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-68d11cc2ab47 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:8d:ad:37:cb:fe} reservation:<nil>}
	I1227 20:58:41.489856  517451 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d3b7cfff4895 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:4a:e3:08:10:2f} reservation:<nil>}
	I1227 20:58:41.490132  517451 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c1ebbbafc127 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7a:a5:c1:28:f3:5c} reservation:<nil>}
	I1227 20:58:41.490539  517451 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d5410}
	I1227 20:58:41.490556  517451 network_create.go:124] attempt to create docker network auto-037975 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1227 20:58:41.490620  517451 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-037975 auto-037975
	I1227 20:58:41.556691  517451 network_create.go:108] docker network auto-037975 192.168.85.0/24 created
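The subnet scan above skips 192.168.49.0/24, 192.168.58.0/24, 192.168.67.0/24 and 192.168.76.0/24 because existing minikube bridge networks already hold them, then creates auto-037975 on 192.168.85.0/24. A minimal sketch, assuming those networks are still present on the host, of listing them by the label minikube applies and confirming the new network's subnet; the label and the format template are taken from the commands logged above:

	docker network ls --filter label=created_by.minikube.sigs.k8s.io=true
	docker network inspect auto-037975 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'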
	I1227 20:58:41.556719  517451 kic.go:121] calculated static IP "192.168.85.2" for the "auto-037975" container
	I1227 20:58:41.556802  517451 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 20:58:41.577425  517451 cli_runner.go:164] Run: docker volume create auto-037975 --label name.minikube.sigs.k8s.io=auto-037975 --label created_by.minikube.sigs.k8s.io=true
	I1227 20:58:41.600392  517451 oci.go:103] Successfully created a docker volume auto-037975
	I1227 20:58:41.600488  517451 cli_runner.go:164] Run: docker run --rm --name auto-037975-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-037975 --entrypoint /usr/bin/test -v auto-037975:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 20:58:42.572664  517451 oci.go:107] Successfully prepared a docker volume auto-037975
	I1227 20:58:42.572726  517451 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:58:42.572736  517451 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 20:58:42.572799  517451 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-037975:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 20:58:46.123031  515650 default_sa.go:45] found service account: "default"
	I1227 20:58:46.123055  515650 default_sa.go:55] duration metric: took 7.687283ms for default service account to be created ...
	I1227 20:58:46.123066  515650 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:58:46.168986  515650 system_pods.go:86] 8 kube-system pods found
	I1227 20:58:46.169015  515650 system_pods.go:89] "coredns-7d764666f9-p7xs9" [b5728b9d-d5dd-4946-971a-543ccae4bbb5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:58:46.169027  515650 system_pods.go:89] "etcd-no-preload-542467" [b2fc9fb5-3a79-4162-afa1-f4132b555027] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:58:46.169034  515650 system_pods.go:89] "kindnet-2v4p8" [9c2c77c3-7d5e-45f4-8eea-f6928cf134f5] Running
	I1227 20:58:46.169041  515650 system_pods.go:89] "kube-apiserver-no-preload-542467" [fb1f5660-5f3a-42ef-b2e5-4ba7758fcf27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:58:46.169048  515650 system_pods.go:89] "kube-controller-manager-no-preload-542467" [0e3c13f8-db03-45b4-a52e-b77e3414cdbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:58:46.169053  515650 system_pods.go:89] "kube-proxy-7mx96" [8d494c52-2b6e-431e-a66c-7f1e3f28a070] Running
	I1227 20:58:46.169060  515650 system_pods.go:89] "kube-scheduler-no-preload-542467" [576aa9a8-b4fa-4e56-a9a5-b438a60e0e18] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:58:46.169064  515650 system_pods.go:89] "storage-provisioner" [20b095bb-fb60-4860-ae08-c05d950bd9ea] Running
	I1227 20:58:46.169072  515650 system_pods.go:126] duration metric: took 45.999612ms to wait for k8s-apps to be running ...
	I1227 20:58:46.169079  515650 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:58:46.169131  515650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:58:46.188213  515650 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1227 20:58:46.210760  515650 system_svc.go:56] duration metric: took 41.671057ms WaitForService to wait for kubelet
	I1227 20:58:46.215833  515650 kubeadm.go:587] duration metric: took 7.386173102s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:58:46.215872  515650 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:58:46.251280  515650 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:58:46.251362  515650 node_conditions.go:123] node cpu capacity is 2
	I1227 20:58:46.251390  515650 node_conditions.go:105] duration metric: took 35.511232ms to run NodePressure ...
	I1227 20:58:46.251434  515650 start.go:242] waiting for startup goroutines ...
	I1227 20:58:46.252710  515650 addons.go:530] duration metric: took 7.422589541s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1227 20:58:46.252745  515650 start.go:247] waiting for cluster config update ...
	I1227 20:58:46.252759  515650 start.go:256] writing updated cluster config ...
	I1227 20:58:46.254988  515650 ssh_runner.go:195] Run: rm -f paused
	I1227 20:58:46.259271  515650 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:58:46.262783  515650 pod_ready.go:83] waiting for pod "coredns-7d764666f9-p7xs9" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 20:58:48.272377  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	I1227 20:58:46.632186  517451 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-037975:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.059343902s)
	I1227 20:58:46.632223  517451 kic.go:203] duration metric: took 4.059483458s to extract preloaded images to volume ...
	W1227 20:58:46.632364  517451 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 20:58:46.632473  517451 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 20:58:46.688533  517451 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-037975 --name auto-037975 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-037975 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-037975 --network auto-037975 --ip 192.168.85.2 --volume auto-037975:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 20:58:47.002848  517451 cli_runner.go:164] Run: docker container inspect auto-037975 --format={{.State.Running}}
	I1227 20:58:47.025952  517451 cli_runner.go:164] Run: docker container inspect auto-037975 --format={{.State.Status}}
	I1227 20:58:47.044308  517451 cli_runner.go:164] Run: docker exec auto-037975 stat /var/lib/dpkg/alternatives/iptables
	I1227 20:58:47.113163  517451 oci.go:144] the created container "auto-037975" has a running status.
	I1227 20:58:47.113190  517451 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/auto-037975/id_rsa...
	I1227 20:58:47.277926  517451 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22332-272475/.minikube/machines/auto-037975/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 20:58:47.311228  517451 cli_runner.go:164] Run: docker container inspect auto-037975 --format={{.State.Status}}
	I1227 20:58:47.339220  517451 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 20:58:47.339239  517451 kic_runner.go:114] Args: [docker exec --privileged auto-037975 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 20:58:47.391901  517451 cli_runner.go:164] Run: docker container inspect auto-037975 --format={{.State.Status}}
	I1227 20:58:47.418389  517451 machine.go:94] provisionDockerMachine start ...
	I1227 20:58:47.418489  517451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-037975
	I1227 20:58:47.445773  517451 main.go:144] libmachine: Using SSH client type: native
	I1227 20:58:47.446232  517451 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1227 20:58:47.446246  517451 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:58:47.447836  517451 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46590->127.0.0.1:33458: read: connection reset by peer
	I1227 20:58:50.622986  517451 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-037975
	
	I1227 20:58:50.623069  517451 ubuntu.go:182] provisioning hostname "auto-037975"
	I1227 20:58:50.623170  517451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-037975
	I1227 20:58:50.653129  517451 main.go:144] libmachine: Using SSH client type: native
	I1227 20:58:50.653432  517451 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1227 20:58:50.653528  517451 main.go:144] libmachine: About to run SSH command:
	sudo hostname auto-037975 && echo "auto-037975" | sudo tee /etc/hostname
	I1227 20:58:50.822167  517451 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-037975
	
	I1227 20:58:50.822319  517451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-037975
	I1227 20:58:50.848011  517451 main.go:144] libmachine: Using SSH client type: native
	I1227 20:58:50.848437  517451 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1227 20:58:50.848462  517451 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-037975' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-037975/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-037975' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:58:51.003876  517451 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:58:51.003909  517451 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:58:51.003967  517451 ubuntu.go:190] setting up certificates
	I1227 20:58:51.003976  517451 provision.go:84] configureAuth start
	I1227 20:58:51.004039  517451 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-037975
	I1227 20:58:51.031722  517451 provision.go:143] copyHostCerts
	I1227 20:58:51.031792  517451 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:58:51.031806  517451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:58:51.031908  517451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:58:51.032029  517451 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:58:51.032041  517451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:58:51.032074  517451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:58:51.032173  517451 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:58:51.032203  517451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:58:51.032239  517451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:58:51.032318  517451 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.auto-037975 san=[127.0.0.1 192.168.85.2 auto-037975 localhost minikube]
	I1227 20:58:51.276594  517451 provision.go:177] copyRemoteCerts
	I1227 20:58:51.276659  517451 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:58:51.276706  517451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-037975
	I1227 20:58:51.300817  517451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/auto-037975/id_rsa Username:docker}
	I1227 20:58:51.406718  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:58:51.427931  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1227 20:58:51.448503  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
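The provisioning and certificate copies above all go over the forwarded SSH port of the kic container. A minimal sketch, assuming the auto-037975 container is still running and the port mapping is unchanged, of opening the same connection by hand; host, port, key path and user are taken from the ssh client line above:

	ssh -o StrictHostKeyChecking=no -p 33458 \
	    -i /home/jenkins/minikube-integration/22332-272475/.minikube/machines/auto-037975/id_rsa \
	    docker@127.0.0.1 hostname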
	I1227 20:58:51.468309  517451 provision.go:87] duration metric: took 464.310953ms to configureAuth
	I1227 20:58:51.468387  517451 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:58:51.468615  517451 config.go:182] Loaded profile config "auto-037975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:58:51.468785  517451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-037975
	I1227 20:58:51.488499  517451 main.go:144] libmachine: Using SSH client type: native
	I1227 20:58:51.488810  517451 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1227 20:58:51.488824  517451 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:58:51.973288  517451 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:58:51.973312  517451 machine.go:97] duration metric: took 4.554903221s to provisionDockerMachine
	I1227 20:58:51.973323  517451 client.go:176] duration metric: took 10.550825068s to LocalClient.Create
	I1227 20:58:51.973337  517451 start.go:167] duration metric: took 10.550873887s to libmachine.API.Create "auto-037975"
	I1227 20:58:51.973344  517451 start.go:293] postStartSetup for "auto-037975" (driver="docker")
	I1227 20:58:51.973355  517451 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:58:51.973416  517451 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:58:51.973507  517451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-037975
	I1227 20:58:51.993267  517451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/auto-037975/id_rsa Username:docker}
	I1227 20:58:52.108155  517451 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:58:52.111864  517451 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:58:52.111889  517451 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:58:52.111900  517451 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:58:52.111952  517451 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:58:52.112025  517451 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:58:52.112127  517451 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:58:52.120638  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:58:52.144037  517451 start.go:296] duration metric: took 170.677888ms for postStartSetup
	I1227 20:58:52.144395  517451 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-037975
	I1227 20:58:52.169518  517451 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/config.json ...
	I1227 20:58:52.169787  517451 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:58:52.169826  517451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-037975
	I1227 20:58:52.193294  517451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/auto-037975/id_rsa Username:docker}
	I1227 20:58:52.299528  517451 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:58:52.304288  517451 start.go:128] duration metric: took 10.885418254s to createHost
	I1227 20:58:52.304310  517451 start.go:83] releasing machines lock for "auto-037975", held for 10.885540713s
	I1227 20:58:52.304375  517451 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-037975
	I1227 20:58:52.327429  517451 ssh_runner.go:195] Run: cat /version.json
	I1227 20:58:52.327481  517451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-037975
	I1227 20:58:52.327706  517451 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:58:52.327760  517451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-037975
	I1227 20:58:52.361819  517451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/auto-037975/id_rsa Username:docker}
	I1227 20:58:52.362163  517451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/auto-037975/id_rsa Username:docker}
	I1227 20:58:52.595615  517451 ssh_runner.go:195] Run: systemctl --version
	I1227 20:58:52.603346  517451 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:58:52.659135  517451 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:58:52.664305  517451 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:58:52.664387  517451 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:58:52.697078  517451 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 20:58:52.697105  517451 start.go:496] detecting cgroup driver to use...
	I1227 20:58:52.697137  517451 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:58:52.697187  517451 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:58:52.717701  517451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:58:52.730434  517451 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:58:52.730557  517451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:58:52.748783  517451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:58:52.769316  517451 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:58:52.896765  517451 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:58:53.069985  517451 docker.go:234] disabling docker service ...
	I1227 20:58:53.070052  517451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:58:53.096159  517451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:58:53.110730  517451 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:58:53.314278  517451 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:58:53.493565  517451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:58:53.508496  517451 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:58:53.523940  517451 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:58:53.524027  517451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:53.533106  517451 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:58:53.533197  517451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:53.542304  517451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:53.551035  517451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:53.559766  517451 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:58:53.573842  517451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:53.582733  517451 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:53.596754  517451 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
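	[editor's note] The sed edits above pin the pause image, cgroup driver, conmon cgroup, and the unprivileged-port sysctl in the CRI-O drop-in. A minimal sketch of the resulting keys in /etc/crio/crio.conf.d/02-crio.conf, derived only from the commands shown (ordering and surrounding TOML sections in the real file may differ):
		pause_image = "registry.k8s.io/pause:3.10.1"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]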
	I1227 20:58:53.605913  517451 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:58:53.616137  517451 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:58:53.624191  517451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:58:53.773319  517451 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 20:58:54.047188  517451 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:58:54.047339  517451 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:58:54.055966  517451 start.go:574] Will wait 60s for crictl version
	I1227 20:58:54.056085  517451 ssh_runner.go:195] Run: which crictl
	I1227 20:58:54.060811  517451 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:58:54.102754  517451 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:58:54.102913  517451 ssh_runner.go:195] Run: crio --version
	I1227 20:58:54.141243  517451 ssh_runner.go:195] Run: crio --version
	I1227 20:58:54.204261  517451 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	W1227 20:58:50.768138  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	W1227 20:58:52.770965  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	I1227 20:58:54.207360  517451 cli_runner.go:164] Run: docker network inspect auto-037975 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:58:54.229829  517451 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 20:58:54.234321  517451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:58:54.247893  517451 kubeadm.go:884] updating cluster {Name:auto-037975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-037975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:58:54.248005  517451 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:58:54.248058  517451 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:58:54.315599  517451 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:58:54.315619  517451 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:58:54.315673  517451 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:58:54.359384  517451 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:58:54.359430  517451 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:58:54.359444  517451 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1227 20:58:54.359532  517451 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-037975 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:auto-037975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:58:54.359614  517451 ssh_runner.go:195] Run: crio config
	I1227 20:58:54.463486  517451 cni.go:84] Creating CNI manager for ""
	I1227 20:58:54.463556  517451 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:58:54.463586  517451 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:58:54.463646  517451 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-037975 NodeName:auto-037975 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:58:54.463827  517451 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-037975"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:58:54.463933  517451 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:58:54.473152  517451 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:58:54.473268  517451 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:58:54.480842  517451 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1227 20:58:54.496298  517451 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:58:54.509431  517451 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1227 20:58:54.522995  517451 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:58:54.527306  517451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
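	[editor's note] Both host records are injected with the same grep-and-append idiom: strip any existing line for the name, append the new entry, then copy the temp file back over /etc/hosts. After the two commands above, the entries added inside the node are:
		192.168.85.1	host.minikube.internal
		192.168.85.2	control-plane.minikube.internal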
	I1227 20:58:54.536953  517451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:58:54.694625  517451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:58:54.712372  517451 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975 for IP: 192.168.85.2
	I1227 20:58:54.712443  517451 certs.go:195] generating shared ca certs ...
	I1227 20:58:54.712473  517451 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:54.712647  517451 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:58:54.712728  517451 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:58:54.712767  517451 certs.go:257] generating profile certs ...
	I1227 20:58:54.712857  517451 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/client.key
	I1227 20:58:54.712899  517451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/client.crt with IP's: []
	I1227 20:58:55.013077  517451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/client.crt ...
	I1227 20:58:55.013118  517451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/client.crt: {Name:mkf046e2079b7fa075d1cd71697496ffbf7320ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:55.013396  517451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/client.key ...
	I1227 20:58:55.013417  517451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/client.key: {Name:mka7aa226a768ec29ad6e29bb6acc414ea10d550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:55.013635  517451 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/apiserver.key.36204d99
	I1227 20:58:55.013676  517451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/apiserver.crt.36204d99 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1227 20:58:55.419503  517451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/apiserver.crt.36204d99 ...
	I1227 20:58:55.419535  517451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/apiserver.crt.36204d99: {Name:mk777375fe379f76f324a2aeed9e5dc766756a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:55.419780  517451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/apiserver.key.36204d99 ...
	I1227 20:58:55.419797  517451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/apiserver.key.36204d99: {Name:mk27bcf8bc7f5d60a1f156b99ce7c02840ccfcff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:55.419921  517451 certs.go:382] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/apiserver.crt.36204d99 -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/apiserver.crt
	I1227 20:58:55.420042  517451 certs.go:386] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/apiserver.key.36204d99 -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/apiserver.key
	I1227 20:58:55.420127  517451 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/proxy-client.key
	I1227 20:58:55.420161  517451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/proxy-client.crt with IP's: []
	I1227 20:58:55.529731  517451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/proxy-client.crt ...
	I1227 20:58:55.529762  517451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/proxy-client.crt: {Name:mkc7aff75cc28455ac101bb11f055ca2ac7a54a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:55.529988  517451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/proxy-client.key ...
	I1227 20:58:55.530006  517451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/proxy-client.key: {Name:mkda095a66ed7eb88c3f28148a58d32bba3e9afa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:55.530241  517451 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:58:55.530303  517451 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:58:55.530319  517451 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:58:55.530362  517451 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:58:55.530410  517451 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:58:55.530445  517451 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:58:55.530511  517451 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:58:55.531109  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:58:55.551489  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:58:55.574961  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:58:55.597080  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:58:55.623581  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1227 20:58:55.646523  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:58:55.668899  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:58:55.692154  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:58:55.727072  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:58:55.770152  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:58:55.801474  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:58:55.836272  517451 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:58:55.867571  517451 ssh_runner.go:195] Run: openssl version
	I1227 20:58:55.877271  517451 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:58:55.885421  517451 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:58:55.893648  517451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:58:55.898738  517451 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:58:55.898842  517451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:58:55.944545  517451 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:58:55.951999  517451 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2743362.pem /etc/ssl/certs/3ec20f2e.0
	I1227 20:58:55.959356  517451 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:58:55.966504  517451 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:58:55.981995  517451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:58:55.988045  517451 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:58:55.988146  517451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:58:56.047379  517451 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:58:56.059974  517451 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 20:58:56.072972  517451 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:58:56.082852  517451 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:58:56.097178  517451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:58:56.101877  517451 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:58:56.101949  517451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:58:56.145582  517451 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:58:56.154691  517451 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/274336.pem /etc/ssl/certs/51391683.0
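	[editor's note] The ln -fs calls above follow OpenSSL's c_rehash convention: the link name is the certificate's subject hash plus a ".0" suffix, which is how TLS clients resolve a CA under /etc/ssl/certs. A hedged shell equivalent for one of the certs handled here (the hash b5213941 is the value seen in this log):
		HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"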
	I1227 20:58:56.162030  517451 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:58:56.166658  517451 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 20:58:56.166742  517451 kubeadm.go:401] StartCluster: {Name:auto-037975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-037975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:58:56.166848  517451 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:58:56.166911  517451 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:58:56.199556  517451 cri.go:96] found id: ""
	I1227 20:58:56.199675  517451 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:58:56.210340  517451 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 20:58:56.218082  517451 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:58:56.218172  517451 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:58:56.228484  517451 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:58:56.228506  517451 kubeadm.go:158] found existing configuration files:
	
	I1227 20:58:56.228584  517451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 20:58:56.237276  517451 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:58:56.237364  517451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:58:56.245116  517451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 20:58:56.254018  517451 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:58:56.254122  517451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:58:56.261657  517451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 20:58:56.275090  517451 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:58:56.275176  517451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:58:56.282975  517451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 20:58:56.291016  517451 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:58:56.291108  517451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 20:58:56.298702  517451 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:58:56.354993  517451 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 20:58:56.355417  517451 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 20:58:56.509324  517451 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 20:58:56.509469  517451 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 20:58:56.509535  517451 kubeadm.go:319] OS: Linux
	I1227 20:58:56.509608  517451 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 20:58:56.509687  517451 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 20:58:56.509758  517451 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 20:58:56.509836  517451 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 20:58:56.509910  517451 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 20:58:56.509989  517451 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 20:58:56.510059  517451 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 20:58:56.510134  517451 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 20:58:56.510203  517451 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 20:58:56.644391  517451 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 20:58:56.644562  517451 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 20:58:56.644707  517451 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 20:58:56.665862  517451 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1227 20:58:54.785251  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	W1227 20:58:57.268840  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	W1227 20:58:59.270919  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	I1227 20:58:56.672844  517451 out.go:252]   - Generating certificates and keys ...
	I1227 20:58:56.672973  517451 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 20:58:56.673097  517451 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 20:58:56.846204  517451 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 20:58:57.084661  517451 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 20:58:57.607441  517451 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 20:58:57.802796  517451 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 20:58:58.059688  517451 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 20:58:58.060315  517451 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-037975 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 20:58:58.583594  517451 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 20:58:58.584120  517451 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-037975 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 20:58:58.851092  517451 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 20:58:58.956217  517451 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 20:58:59.124104  517451 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 20:58:59.124443  517451 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 20:58:59.271366  517451 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 20:58:59.437822  517451 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 20:58:59.721183  517451 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 20:58:59.934928  517451 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 20:59:00.329319  517451 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 20:59:00.333733  517451 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 20:59:00.344081  517451 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 20:59:00.347706  517451 out.go:252]   - Booting up control plane ...
	I1227 20:59:00.347835  517451 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 20:59:00.347920  517451 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 20:59:00.349850  517451 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 20:59:00.369399  517451 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 20:59:00.369559  517451 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 20:59:00.381259  517451 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 20:59:00.381835  517451 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 20:59:00.381894  517451 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 20:59:00.568137  517451 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 20:59:00.568875  517451 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1227 20:59:01.769103  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	W1227 20:59:04.276706  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	I1227 20:59:01.571691  517451 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002744451s
	I1227 20:59:01.576012  517451 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 20:59:01.576160  517451 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1227 20:59:01.576286  517451 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 20:59:01.576397  517451 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 20:59:02.587384  517451 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.010163995s
	I1227 20:59:04.494389  517451 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.91841069s
	I1227 20:59:06.079271  517451 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502354568s
	I1227 20:59:06.119032  517451 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 20:59:06.145851  517451 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 20:59:06.160703  517451 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 20:59:06.161571  517451 kubeadm.go:319] [mark-control-plane] Marking the node auto-037975 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 20:59:06.179263  517451 kubeadm.go:319] [bootstrap-token] Using token: sslnvq.tedoe2sw894igal5
	I1227 20:59:06.182151  517451 out.go:252]   - Configuring RBAC rules ...
	I1227 20:59:06.182279  517451 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 20:59:06.191762  517451 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 20:59:06.203143  517451 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 20:59:06.207417  517451 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 20:59:06.211890  517451 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 20:59:06.218037  517451 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 20:59:06.489204  517451 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 20:59:06.924768  517451 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 20:59:07.490375  517451 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 20:59:07.491752  517451 kubeadm.go:319] 
	I1227 20:59:07.491831  517451 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 20:59:07.491843  517451 kubeadm.go:319] 
	I1227 20:59:07.491933  517451 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 20:59:07.491943  517451 kubeadm.go:319] 
	I1227 20:59:07.491969  517451 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 20:59:07.492030  517451 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 20:59:07.492108  517451 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 20:59:07.492122  517451 kubeadm.go:319] 
	I1227 20:59:07.492178  517451 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 20:59:07.492182  517451 kubeadm.go:319] 
	I1227 20:59:07.492229  517451 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 20:59:07.492233  517451 kubeadm.go:319] 
	I1227 20:59:07.492285  517451 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 20:59:07.492360  517451 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 20:59:07.492429  517451 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 20:59:07.492433  517451 kubeadm.go:319] 
	I1227 20:59:07.492517  517451 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 20:59:07.492595  517451 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 20:59:07.492598  517451 kubeadm.go:319] 
	I1227 20:59:07.492682  517451 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token sslnvq.tedoe2sw894igal5 \
	I1227 20:59:07.492799  517451 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ff29328d1e0d612c7979c16c69d6042f5f31e931d111cc12c8320ed4e4ab5152 \
	I1227 20:59:07.492821  517451 kubeadm.go:319] 	--control-plane 
	I1227 20:59:07.492825  517451 kubeadm.go:319] 
	I1227 20:59:07.492910  517451 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 20:59:07.492914  517451 kubeadm.go:319] 
	I1227 20:59:07.492996  517451 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token sslnvq.tedoe2sw894igal5 \
	I1227 20:59:07.493098  517451 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ff29328d1e0d612c7979c16c69d6042f5f31e931d111cc12c8320ed4e4ab5152 
	I1227 20:59:07.495538  517451 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 20:59:07.495953  517451 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 20:59:07.496066  517451 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 20:59:07.496086  517451 cni.go:84] Creating CNI manager for ""
	I1227 20:59:07.496094  517451 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:59:07.501054  517451 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1227 20:59:06.768105  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	W1227 20:59:09.269300  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	I1227 20:59:07.503871  517451 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 20:59:07.508152  517451 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 20:59:07.508176  517451 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 20:59:07.526195  517451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
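	[editor's note] As the cni.go lines above note, the docker driver with the crio runtime gets kindnet. A hedged way to confirm the manifest landed (the DaemonSet name "kindnet" is an assumption inferred from the kindnet-ccqpq pod that appears later in this log):
		kubectl -n kube-system get daemonset kindnet
		kubectl -n kube-system get pods -o wide | grep kindnet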
	I1227 20:59:08.293162  517451 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 20:59:08.293307  517451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:59:08.293384  517451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-037975 minikube.k8s.io/updated_at=2025_12_27T20_59_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562 minikube.k8s.io/name=auto-037975 minikube.k8s.io/primary=true
	I1227 20:59:08.444823  517451 ops.go:34] apiserver oom_adj: -16
	I1227 20:59:08.444945  517451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:59:08.945595  517451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:59:09.445743  517451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:59:09.945217  517451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:59:10.445088  517451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:59:10.945112  517451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:59:11.445599  517451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:59:11.945654  517451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:59:12.445485  517451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:59:12.596902  517451 kubeadm.go:1114] duration metric: took 4.303656264s to wait for elevateKubeSystemPrivileges
	I1227 20:59:12.596951  517451 kubeadm.go:403] duration metric: took 16.430208593s to StartCluster
	I1227 20:59:12.596969  517451 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:59:12.597042  517451 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:59:12.598152  517451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:59:12.598394  517451 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:59:12.598530  517451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 20:59:12.598842  517451 config.go:182] Loaded profile config "auto-037975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:59:12.598891  517451 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:59:12.598964  517451 addons.go:70] Setting storage-provisioner=true in profile "auto-037975"
	I1227 20:59:12.598990  517451 addons.go:239] Setting addon storage-provisioner=true in "auto-037975"
	I1227 20:59:12.599018  517451 host.go:66] Checking if "auto-037975" exists ...
	I1227 20:59:12.599894  517451 cli_runner.go:164] Run: docker container inspect auto-037975 --format={{.State.Status}}
	I1227 20:59:12.600299  517451 addons.go:70] Setting default-storageclass=true in profile "auto-037975"
	I1227 20:59:12.600334  517451 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-037975"
	I1227 20:59:12.600603  517451 cli_runner.go:164] Run: docker container inspect auto-037975 --format={{.State.Status}}
	I1227 20:59:12.602363  517451 out.go:179] * Verifying Kubernetes components...
	I1227 20:59:12.605924  517451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:59:12.643075  517451 addons.go:239] Setting addon default-storageclass=true in "auto-037975"
	I1227 20:59:12.643126  517451 host.go:66] Checking if "auto-037975" exists ...
	I1227 20:59:12.643527  517451 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:59:12.643753  517451 cli_runner.go:164] Run: docker container inspect auto-037975 --format={{.State.Status}}
	I1227 20:59:12.646639  517451 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:59:12.646660  517451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:59:12.646732  517451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-037975
	I1227 20:59:12.696221  517451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/auto-037975/id_rsa Username:docker}
	I1227 20:59:12.700733  517451 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:59:12.700755  517451 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:59:12.700814  517451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-037975
	I1227 20:59:12.729873  517451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/auto-037975/id_rsa Username:docker}
	I1227 20:59:12.980086  517451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
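	[editor's note] The sed pipeline above rewrites the CoreDNS Corefile before replacing the ConfigMap: a log directive is inserted above the errors plugin, and this stanza is spliced in just above the forward plugin so pods can resolve host.minikube.internal:
		hosts {
		   192.168.85.1 host.minikube.internal
		   fallthrough
		}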
	I1227 20:59:12.980237  517451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:59:13.007170  517451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:59:13.064792  517451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:59:13.755512  517451 node_ready.go:35] waiting up to 15m0s for node "auto-037975" to be "Ready" ...
	I1227 20:59:13.755933  517451 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1227 20:59:14.105416  517451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.098211465s)
	I1227 20:59:14.105513  517451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.040651974s)
	I1227 20:59:14.116493  517451 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1227 20:59:11.769789  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	W1227 20:59:13.777664  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	I1227 20:59:14.119543  517451 addons.go:530] duration metric: took 1.520649283s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 20:59:14.260273  517451 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-037975" context rescaled to 1 replicas
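	[editor's note] The rescale logged here is performed through the API; a manual kubectl equivalent (for illustration only, not the call minikube makes) would be:
		kubectl -n kube-system scale deployment coredns --replicas=1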
	W1227 20:59:15.758975  517451 node_ready.go:57] node "auto-037975" has "Ready":"False" status (will retry)
	W1227 20:59:16.272294  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	W1227 20:59:18.768047  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	W1227 20:59:18.258192  517451 node_ready.go:57] node "auto-037975" has "Ready":"False" status (will retry)
	W1227 20:59:20.261363  517451 node_ready.go:57] node "auto-037975" has "Ready":"False" status (will retry)
	I1227 20:59:19.767607  515650 pod_ready.go:94] pod "coredns-7d764666f9-p7xs9" is "Ready"
	I1227 20:59:19.767638  515650 pod_ready.go:86] duration metric: took 33.504826713s for pod "coredns-7d764666f9-p7xs9" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:19.770064  515650 pod_ready.go:83] waiting for pod "etcd-no-preload-542467" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:19.774154  515650 pod_ready.go:94] pod "etcd-no-preload-542467" is "Ready"
	I1227 20:59:19.774180  515650 pod_ready.go:86] duration metric: took 4.090261ms for pod "etcd-no-preload-542467" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:19.776171  515650 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-542467" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:19.780428  515650 pod_ready.go:94] pod "kube-apiserver-no-preload-542467" is "Ready"
	I1227 20:59:19.780455  515650 pod_ready.go:86] duration metric: took 4.259708ms for pod "kube-apiserver-no-preload-542467" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:19.782758  515650 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-542467" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:19.965939  515650 pod_ready.go:94] pod "kube-controller-manager-no-preload-542467" is "Ready"
	I1227 20:59:19.965971  515650 pod_ready.go:86] duration metric: took 183.192418ms for pod "kube-controller-manager-no-preload-542467" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:20.166274  515650 pod_ready.go:83] waiting for pod "kube-proxy-7mx96" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:20.566283  515650 pod_ready.go:94] pod "kube-proxy-7mx96" is "Ready"
	I1227 20:59:20.566310  515650 pod_ready.go:86] duration metric: took 400.002561ms for pod "kube-proxy-7mx96" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:20.766019  515650 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-542467" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:21.165532  515650 pod_ready.go:94] pod "kube-scheduler-no-preload-542467" is "Ready"
	I1227 20:59:21.165558  515650 pod_ready.go:86] duration metric: took 399.515592ms for pod "kube-scheduler-no-preload-542467" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:21.165570  515650 pod_ready.go:40] duration metric: took 34.906269536s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:59:21.219181  515650 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 20:59:21.222440  515650 out.go:203] 
	W1227 20:59:21.225360  515650 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 20:59:21.228332  515650 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 20:59:21.231534  515650 out.go:179] * Done! kubectl is now configured to use "no-preload-542467" cluster and "default" namespace by default
	W1227 20:59:22.759229  517451 node_ready.go:57] node "auto-037975" has "Ready":"False" status (will retry)
	W1227 20:59:25.259284  517451 node_ready.go:57] node "auto-037975" has "Ready":"False" status (will retry)
	I1227 20:59:26.278132  517451 node_ready.go:49] node "auto-037975" is "Ready"
	I1227 20:59:26.278159  517451 node_ready.go:38] duration metric: took 12.522564557s for node "auto-037975" to be "Ready" ...
	I1227 20:59:26.278173  517451 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:59:26.278228  517451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:59:26.297803  517451 api_server.go:72] duration metric: took 13.699373969s to wait for apiserver process to appear ...
	I1227 20:59:26.297825  517451 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:59:26.297844  517451 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 20:59:26.311651  517451 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1227 20:59:26.312872  517451 api_server.go:141] control plane version: v1.35.0
	I1227 20:59:26.312896  517451 api_server.go:131] duration metric: took 15.063559ms to wait for apiserver health ...
	I1227 20:59:26.312905  517451 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:59:26.316170  517451 system_pods.go:59] 8 kube-system pods found
	I1227 20:59:26.316207  517451 system_pods.go:61] "coredns-7d764666f9-rkj2k" [f6beeeae-d175-46f3-a73e-8023b7663622] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:59:26.316214  517451 system_pods.go:61] "etcd-auto-037975" [70f7cd8f-e106-4135-a7d9-58b42ee396b8] Running
	I1227 20:59:26.316222  517451 system_pods.go:61] "kindnet-ccqpq" [8f504a6b-6ab1-47e2-9117-ee5b272539df] Running
	I1227 20:59:26.316226  517451 system_pods.go:61] "kube-apiserver-auto-037975" [ccd46d26-4414-4fd0-a2d9-8b8041cf2efb] Running
	I1227 20:59:26.316230  517451 system_pods.go:61] "kube-controller-manager-auto-037975" [1a8e3761-f836-4208-862e-c95875274cb0] Running
	I1227 20:59:26.316235  517451 system_pods.go:61] "kube-proxy-jp6cb" [bb840391-13cb-4762-81be-b759ccff79e8] Running
	I1227 20:59:26.316242  517451 system_pods.go:61] "kube-scheduler-auto-037975" [785266d0-11b5-4efb-90c0-3acb68d15ef7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:59:26.316256  517451 system_pods.go:61] "storage-provisioner" [0926543f-2839-40a5-92c4-fb237a5834ae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:59:26.316273  517451 system_pods.go:74] duration metric: took 3.353493ms to wait for pod list to return data ...
	I1227 20:59:26.316281  517451 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:59:26.320163  517451 default_sa.go:45] found service account: "default"
	I1227 20:59:26.320182  517451 default_sa.go:55] duration metric: took 3.894828ms for default service account to be created ...
	I1227 20:59:26.320191  517451 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:59:26.323132  517451 system_pods.go:86] 8 kube-system pods found
	I1227 20:59:26.323204  517451 system_pods.go:89] "coredns-7d764666f9-rkj2k" [f6beeeae-d175-46f3-a73e-8023b7663622] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:59:26.323228  517451 system_pods.go:89] "etcd-auto-037975" [70f7cd8f-e106-4135-a7d9-58b42ee396b8] Running
	I1227 20:59:26.323266  517451 system_pods.go:89] "kindnet-ccqpq" [8f504a6b-6ab1-47e2-9117-ee5b272539df] Running
	I1227 20:59:26.323292  517451 system_pods.go:89] "kube-apiserver-auto-037975" [ccd46d26-4414-4fd0-a2d9-8b8041cf2efb] Running
	I1227 20:59:26.323315  517451 system_pods.go:89] "kube-controller-manager-auto-037975" [1a8e3761-f836-4208-862e-c95875274cb0] Running
	I1227 20:59:26.323339  517451 system_pods.go:89] "kube-proxy-jp6cb" [bb840391-13cb-4762-81be-b759ccff79e8] Running
	I1227 20:59:26.323374  517451 system_pods.go:89] "kube-scheduler-auto-037975" [785266d0-11b5-4efb-90c0-3acb68d15ef7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:59:26.323402  517451 system_pods.go:89] "storage-provisioner" [0926543f-2839-40a5-92c4-fb237a5834ae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:59:26.323455  517451 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1227 20:59:26.600244  517451 system_pods.go:86] 8 kube-system pods found
	I1227 20:59:26.600332  517451 system_pods.go:89] "coredns-7d764666f9-rkj2k" [f6beeeae-d175-46f3-a73e-8023b7663622] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:59:26.600377  517451 system_pods.go:89] "etcd-auto-037975" [70f7cd8f-e106-4135-a7d9-58b42ee396b8] Running
	I1227 20:59:26.600409  517451 system_pods.go:89] "kindnet-ccqpq" [8f504a6b-6ab1-47e2-9117-ee5b272539df] Running
	I1227 20:59:26.600432  517451 system_pods.go:89] "kube-apiserver-auto-037975" [ccd46d26-4414-4fd0-a2d9-8b8041cf2efb] Running
	I1227 20:59:26.600457  517451 system_pods.go:89] "kube-controller-manager-auto-037975" [1a8e3761-f836-4208-862e-c95875274cb0] Running
	I1227 20:59:26.600492  517451 system_pods.go:89] "kube-proxy-jp6cb" [bb840391-13cb-4762-81be-b759ccff79e8] Running
	I1227 20:59:26.600528  517451 system_pods.go:89] "kube-scheduler-auto-037975" [785266d0-11b5-4efb-90c0-3acb68d15ef7] Running
	I1227 20:59:26.600552  517451 system_pods.go:89] "storage-provisioner" [0926543f-2839-40a5-92c4-fb237a5834ae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:59:26.867675  517451 system_pods.go:86] 8 kube-system pods found
	I1227 20:59:26.867719  517451 system_pods.go:89] "coredns-7d764666f9-rkj2k" [f6beeeae-d175-46f3-a73e-8023b7663622] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:59:26.867726  517451 system_pods.go:89] "etcd-auto-037975" [70f7cd8f-e106-4135-a7d9-58b42ee396b8] Running
	I1227 20:59:26.867734  517451 system_pods.go:89] "kindnet-ccqpq" [8f504a6b-6ab1-47e2-9117-ee5b272539df] Running
	I1227 20:59:26.867738  517451 system_pods.go:89] "kube-apiserver-auto-037975" [ccd46d26-4414-4fd0-a2d9-8b8041cf2efb] Running
	I1227 20:59:26.867743  517451 system_pods.go:89] "kube-controller-manager-auto-037975" [1a8e3761-f836-4208-862e-c95875274cb0] Running
	I1227 20:59:26.867748  517451 system_pods.go:89] "kube-proxy-jp6cb" [bb840391-13cb-4762-81be-b759ccff79e8] Running
	I1227 20:59:26.867752  517451 system_pods.go:89] "kube-scheduler-auto-037975" [785266d0-11b5-4efb-90c0-3acb68d15ef7] Running
	I1227 20:59:26.867758  517451 system_pods.go:89] "storage-provisioner" [0926543f-2839-40a5-92c4-fb237a5834ae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:59:27.341359  517451 system_pods.go:86] 8 kube-system pods found
	I1227 20:59:27.341393  517451 system_pods.go:89] "coredns-7d764666f9-rkj2k" [f6beeeae-d175-46f3-a73e-8023b7663622] Running
	I1227 20:59:27.341401  517451 system_pods.go:89] "etcd-auto-037975" [70f7cd8f-e106-4135-a7d9-58b42ee396b8] Running
	I1227 20:59:27.341406  517451 system_pods.go:89] "kindnet-ccqpq" [8f504a6b-6ab1-47e2-9117-ee5b272539df] Running
	I1227 20:59:27.341410  517451 system_pods.go:89] "kube-apiserver-auto-037975" [ccd46d26-4414-4fd0-a2d9-8b8041cf2efb] Running
	I1227 20:59:27.341415  517451 system_pods.go:89] "kube-controller-manager-auto-037975" [1a8e3761-f836-4208-862e-c95875274cb0] Running
	I1227 20:59:27.341421  517451 system_pods.go:89] "kube-proxy-jp6cb" [bb840391-13cb-4762-81be-b759ccff79e8] Running
	I1227 20:59:27.341426  517451 system_pods.go:89] "kube-scheduler-auto-037975" [785266d0-11b5-4efb-90c0-3acb68d15ef7] Running
	I1227 20:59:27.341432  517451 system_pods.go:89] "storage-provisioner" [0926543f-2839-40a5-92c4-fb237a5834ae] Running
	I1227 20:59:27.341439  517451 system_pods.go:126] duration metric: took 1.021242747s to wait for k8s-apps to be running ...
	I1227 20:59:27.341473  517451 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:59:27.341539  517451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:59:27.353846  517451 system_svc.go:56] duration metric: took 12.364548ms WaitForService to wait for kubelet
	I1227 20:59:27.353874  517451 kubeadm.go:587] duration metric: took 14.755448997s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:59:27.353892  517451 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:59:27.356663  517451 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:59:27.356691  517451 node_conditions.go:123] node cpu capacity is 2
	I1227 20:59:27.356710  517451 node_conditions.go:105] duration metric: took 2.813151ms to run NodePressure ...
	I1227 20:59:27.356722  517451 start.go:242] waiting for startup goroutines ...
	I1227 20:59:27.356729  517451 start.go:247] waiting for cluster config update ...
	I1227 20:59:27.356741  517451 start.go:256] writing updated cluster config ...
	I1227 20:59:27.357018  517451 ssh_runner.go:195] Run: rm -f paused
	I1227 20:59:27.360332  517451 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:59:27.365225  517451 pod_ready.go:83] waiting for pod "coredns-7d764666f9-rkj2k" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:27.369804  517451 pod_ready.go:94] pod "coredns-7d764666f9-rkj2k" is "Ready"
	I1227 20:59:27.369830  517451 pod_ready.go:86] duration metric: took 4.582162ms for pod "coredns-7d764666f9-rkj2k" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:27.372005  517451 pod_ready.go:83] waiting for pod "etcd-auto-037975" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:27.376491  517451 pod_ready.go:94] pod "etcd-auto-037975" is "Ready"
	I1227 20:59:27.376554  517451 pod_ready.go:86] duration metric: took 4.524793ms for pod "etcd-auto-037975" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:27.378791  517451 pod_ready.go:83] waiting for pod "kube-apiserver-auto-037975" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:27.383060  517451 pod_ready.go:94] pod "kube-apiserver-auto-037975" is "Ready"
	I1227 20:59:27.383086  517451 pod_ready.go:86] duration metric: took 4.244767ms for pod "kube-apiserver-auto-037975" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:27.385246  517451 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-037975" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:27.766342  517451 pod_ready.go:94] pod "kube-controller-manager-auto-037975" is "Ready"
	I1227 20:59:27.766373  517451 pod_ready.go:86] duration metric: took 381.10168ms for pod "kube-controller-manager-auto-037975" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:27.966664  517451 pod_ready.go:83] waiting for pod "kube-proxy-jp6cb" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:28.366575  517451 pod_ready.go:94] pod "kube-proxy-jp6cb" is "Ready"
	I1227 20:59:28.366601  517451 pod_ready.go:86] duration metric: took 399.910215ms for pod "kube-proxy-jp6cb" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:28.565778  517451 pod_ready.go:83] waiting for pod "kube-scheduler-auto-037975" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:28.965200  517451 pod_ready.go:94] pod "kube-scheduler-auto-037975" is "Ready"
	I1227 20:59:28.965283  517451 pod_ready.go:86] duration metric: took 399.473329ms for pod "kube-scheduler-auto-037975" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:28.965318  517451 pod_ready.go:40] duration metric: took 1.604940456s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:59:29.023449  517451 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 20:59:29.026751  517451 out.go:203] 
	W1227 20:59:29.029647  517451 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 20:59:29.032539  517451 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 20:59:29.035487  517451 out.go:179] * Done! kubectl is now configured to use "auto-037975" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.626849941Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.629932155Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.629966435Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.629989664Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.633029393Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.633062901Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.633088739Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.637321346Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.63735675Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.637397914Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.642380016Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.64241405Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.925229237Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=82308411-462a-4fe7-a229-4696ae473445 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.926571363Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=50617e42-81ae-46ed-b988-a1f49c0d9959 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.929933413Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9/dashboard-metrics-scraper" id=691caa11-e914-4133-bbee-b79edf878164 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.930038379Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.937129772Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.937895044Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.95247691Z" level=info msg="Created container 8042f28b87a726dade2ba4ff6db74c840e2d9ebdd3cd33f55037fcc0e835344e: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9/dashboard-metrics-scraper" id=691caa11-e914-4133-bbee-b79edf878164 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.954958894Z" level=info msg="Starting container: 8042f28b87a726dade2ba4ff6db74c840e2d9ebdd3cd33f55037fcc0e835344e" id=e901e4b9-ea46-45b1-a8ac-e96064923bf3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.959866347Z" level=info msg="Started container" PID=1725 containerID=8042f28b87a726dade2ba4ff6db74c840e2d9ebdd3cd33f55037fcc0e835344e description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9/dashboard-metrics-scraper id=e901e4b9-ea46-45b1-a8ac-e96064923bf3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e180ede4a28a83ced917078f73b9c474806938f444febea95033b3297e6545a6
	Dec 27 20:59:25 no-preload-542467 conmon[1723]: conmon 8042f28b87a726dade2b <ninfo>: container 1725 exited with status 1
	Dec 27 20:59:26 no-preload-542467 crio[655]: time="2025-12-27T20:59:26.254919793Z" level=info msg="Removing container: 56b7a73675c4449acdc2e70139dd44f2bdf44ab13102bf6e7bcddf97e4adf5b7" id=2f119be4-b9ab-4c70-84d3-470714e6e635 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:59:26 no-preload-542467 crio[655]: time="2025-12-27T20:59:26.279862413Z" level=info msg="Error loading conmon cgroup of container 56b7a73675c4449acdc2e70139dd44f2bdf44ab13102bf6e7bcddf97e4adf5b7: cgroup deleted" id=2f119be4-b9ab-4c70-84d3-470714e6e635 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:59:26 no-preload-542467 crio[655]: time="2025-12-27T20:59:26.28991682Z" level=info msg="Removed container 56b7a73675c4449acdc2e70139dd44f2bdf44ab13102bf6e7bcddf97e4adf5b7: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9/dashboard-metrics-scraper" id=2f119be4-b9ab-4c70-84d3-470714e6e635 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8042f28b87a72       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago      Exited              dashboard-metrics-scraper   3                   e180ede4a28a8       dashboard-metrics-scraper-867fb5f87b-ztnm9   kubernetes-dashboard
	e00bd10efcab4       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           19 seconds ago      Running             storage-provisioner         2                   43df2336db731       storage-provisioner                          kube-system
	0b5f1122d2bae       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   36 seconds ago      Running             kubernetes-dashboard        0                   7c7c08324fe10       kubernetes-dashboard-b84665fb8-mhlrk         kubernetes-dashboard
	496bbc1fc440e       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           51 seconds ago      Running             coredns                     1                   6a52d515735db       coredns-7d764666f9-p7xs9                     kube-system
	858e54c5f6e5f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   c01c147c22f9e       busybox                                      default
	484a2b95e52a7       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           51 seconds ago      Running             kindnet-cni                 1                   c2401cd248dde       kindnet-2v4p8                                kube-system
	be5c6226a6047       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           51 seconds ago      Exited              storage-provisioner         1                   43df2336db731       storage-provisioner                          kube-system
	bc0280cd97160       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           51 seconds ago      Running             kube-proxy                  1                   5e6ee1f427ce3       kube-proxy-7mx96                             kube-system
	66e8f829d9c3d       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           57 seconds ago      Running             etcd                        1                   52a060879f70e       etcd-no-preload-542467                       kube-system
	c19a656202dee       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           57 seconds ago      Running             kube-apiserver              1                   27c016ee7491f       kube-apiserver-no-preload-542467             kube-system
	161b43e94648c       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           57 seconds ago      Running             kube-controller-manager     1                   08fca48ad28bc       kube-controller-manager-no-preload-542467    kube-system
	87c12835ffb38       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           57 seconds ago      Running             kube-scheduler              1                   578b2ad328b89       kube-scheduler-no-preload-542467             kube-system
	
	
	==> coredns [496bbc1fc440e887ec74fe08c1f48d36953507a7dc1d003f4353ff7944432c2d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:38146 - 27015 "HINFO IN 2764909140976315041.6274439778967615360. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005504893s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-542467
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-542467
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=no-preload-542467
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_57_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:57:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-542467
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:59:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:59:14 +0000   Sat, 27 Dec 2025 20:57:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:59:14 +0000   Sat, 27 Dec 2025 20:57:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:59:14 +0000   Sat, 27 Dec 2025 20:57:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:59:14 +0000   Sat, 27 Dec 2025 20:57:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-542467
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                965c0b17-6aea-4550-9015-e80b58ef7dfe
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-7d764666f9-p7xs9                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     112s
	  kube-system                 etcd-no-preload-542467                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         117s
	  kube-system                 kindnet-2v4p8                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-no-preload-542467              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-no-preload-542467     200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-7mx96                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-no-preload-542467              100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-ztnm9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-mhlrk          0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  114s  node-controller  Node no-preload-542467 event: Registered Node no-preload-542467 in Controller
	  Normal  RegisteredNode  49s   node-controller  Node no-preload-542467 event: Registered Node no-preload-542467 in Controller
	
	
	==> dmesg <==
	[Dec27 20:27] overlayfs: idmapped layers are currently not supported
	[  +6.770645] overlayfs: idmapped layers are currently not supported
	[Dec27 20:28] overlayfs: idmapped layers are currently not supported
	[ +25.872751] overlayfs: idmapped layers are currently not supported
	[Dec27 20:29] overlayfs: idmapped layers are currently not supported
	[ +32.997137] overlayfs: idmapped layers are currently not supported
	[Dec27 20:31] overlayfs: idmapped layers are currently not supported
	[Dec27 20:33] overlayfs: idmapped layers are currently not supported
	[ +33.772475] overlayfs: idmapped layers are currently not supported
	[Dec27 20:39] overlayfs: idmapped layers are currently not supported
	[Dec27 20:40] overlayfs: idmapped layers are currently not supported
	[Dec27 20:44] overlayfs: idmapped layers are currently not supported
	[Dec27 20:45] overlayfs: idmapped layers are currently not supported
	[Dec27 20:49] overlayfs: idmapped layers are currently not supported
	[Dec27 20:50] overlayfs: idmapped layers are currently not supported
	[Dec27 20:51] overlayfs: idmapped layers are currently not supported
	[Dec27 20:52] overlayfs: idmapped layers are currently not supported
	[Dec27 20:53] overlayfs: idmapped layers are currently not supported
	[Dec27 20:55] overlayfs: idmapped layers are currently not supported
	[ +57.272039] overlayfs: idmapped layers are currently not supported
	[Dec27 20:57] overlayfs: idmapped layers are currently not supported
	[ +34.093681] overlayfs: idmapped layers are currently not supported
	[Dec27 20:58] overlayfs: idmapped layers are currently not supported
	[ +25.264982] overlayfs: idmapped layers are currently not supported
	[Dec27 20:59] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [66e8f829d9c3d364d238135636c405f3c6255104b333cb219ec600934ec6abd0] <==
	{"level":"info","ts":"2025-12-27T20:58:39.407284Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T20:58:39.407294Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T20:58:39.408314Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-27T20:58:39.408376Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T20:58:39.408431Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T20:58:39.495429Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T20:58:39.495476Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:58:39.495523Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T20:58:39.495538Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:58:39.495553Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T20:58:39.498063Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T20:58:39.498130Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:58:39.498149Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T20:58:39.498159Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T20:58:39.509697Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-542467 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:58:39.509836Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:58:39.510747Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:58:39.542703Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T20:58:39.546950Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:58:39.547003Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:58:39.561482Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:58:39.562526Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:58:39.563392Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2025-12-27T20:58:45.138911Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.723837ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:controller:endpointslice-controller\" limit:1 ","response":"range_response_count:1 size:771"}
	{"level":"info","ts":"2025-12-27T20:58:45.139069Z","caller":"traceutil/trace.go:172","msg":"trace[1795043196] range","detail":"{range_begin:/registry/clusterrolebindings/system:controller:endpointslice-controller; range_end:; response_count:1; response_revision:534; }","duration":"184.948816ms","start":"2025-12-27T20:58:44.954102Z","end":"2025-12-27T20:58:45.139051Z","steps":["trace[1795043196] 'range keys from bolt db'  (duration: 184.046162ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:59:36 up  2:42,  0 user,  load average: 3.40, 2.48, 2.05
	Linux no-preload-542467 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [484a2b95e52a778cc7edd0ec75e04dd23bf1af7cd989cadb7f6465f524fdddc5] <==
	I1227 20:58:45.287731       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:58:45.331432       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 20:58:45.338542       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:58:45.338642       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:58:45.338696       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:58:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:58:45.630429       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:58:45.630459       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:58:45.630468       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:58:45.630564       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 20:59:15.624985       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1227 20:59:15.626348       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 20:59:15.630907       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1227 20:59:15.631017       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1227 20:59:17.231484       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:59:17.231523       1 metrics.go:72] Registering metrics
	I1227 20:59:17.231588       1 controller.go:711] "Syncing nftables rules"
	I1227 20:59:25.622087       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:59:25.622128       1 main.go:301] handling current node
	I1227 20:59:35.622137       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:59:35.622172       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c19a656202deee3c031169a10d16ce0309d87ad8c5c40f4fe78c299c16484dfb] <==
	I1227 20:58:43.577784       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 20:58:43.578522       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 20:58:43.579278       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 20:58:43.579884       1 aggregator.go:187] initial CRD sync complete...
	I1227 20:58:43.579894       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 20:58:43.579900       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:58:43.579906       1 cache.go:39] Caches are synced for autoregister controller
	I1227 20:58:43.590179       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 20:58:43.590299       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 20:58:43.590312       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 20:58:43.590398       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:43.594651       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:58:43.620780       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1227 20:58:43.725350       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 20:58:43.790709       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:58:43.993963       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:58:45.439151       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 20:58:45.683787       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:58:45.810998       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:58:45.849944       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:58:46.036536       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.37.151"}
	I1227 20:58:46.089959       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.180.90"}
	I1227 20:58:47.353002       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:58:47.724877       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:58:47.794406       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [161b43e94648c1d5a060e54751a6efa923997153605bc1c7b6e51c556ac8e5bf] <==
	I1227 20:58:47.284905       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.284913       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.284919       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.284928       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.284935       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.304882       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.285055       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.306088       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.315759       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-542467"
	I1227 20:58:47.306105       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.306112       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.306122       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.306162       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.306181       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.306074       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.316107       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 20:58:47.306137       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.306175       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.300883       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.316573       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.306130       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.391878       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.406716       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.407297       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:58:47.407348       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [bc0280cd97160303d86d9f47745f988149d592e81902cb53c092add4f5fb263b] <==
	I1227 20:58:45.523160       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:58:45.736945       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:58:45.837196       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:45.837227       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 20:58:45.837297       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:58:45.902453       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:58:45.902568       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:58:45.909809       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:58:45.910191       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:58:45.910350       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:58:45.912885       1 config.go:200] "Starting service config controller"
	I1227 20:58:45.912957       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:58:45.913039       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:58:45.913074       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:58:45.913111       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:58:45.913145       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:58:45.915012       1 config.go:309] "Starting node config controller"
	I1227 20:58:45.915079       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:58:45.915108       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:58:46.018156       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 20:58:46.018194       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:58:46.018231       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [87c12835ffb381b6fd21e4708c09054907f3433d8bb1508c5f509e1d6dfef79b] <==
	I1227 20:58:40.160866       1 serving.go:386] Generated self-signed cert in-memory
	W1227 20:58:43.066750       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 20:58:43.066846       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 20:58:43.066907       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 20:58:43.066940       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 20:58:43.554836       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 20:58:43.554944       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:58:43.570331       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 20:58:43.589532       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 20:58:43.589567       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:58:43.589328       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 20:58:43.702036       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:59:01 no-preload-542467 kubelet[779]: E1227 20:59:01.181709     779 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-mhlrk" containerName="kubernetes-dashboard"
	Dec 27 20:59:03 no-preload-542467 kubelet[779]: E1227 20:59:03.924329     779 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9" containerName="dashboard-metrics-scraper"
	Dec 27 20:59:03 no-preload-542467 kubelet[779]: I1227 20:59:03.924380     779 scope.go:122] "RemoveContainer" containerID="28ec85aa83fbb60c8eb0a61fc25a3bcca09ba5cf4d98767929c9b737b0932c19"
	Dec 27 20:59:04 no-preload-542467 kubelet[779]: I1227 20:59:04.189126     779 scope.go:122] "RemoveContainer" containerID="28ec85aa83fbb60c8eb0a61fc25a3bcca09ba5cf4d98767929c9b737b0932c19"
	Dec 27 20:59:04 no-preload-542467 kubelet[779]: E1227 20:59:04.189502     779 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9" containerName="dashboard-metrics-scraper"
	Dec 27 20:59:04 no-preload-542467 kubelet[779]: I1227 20:59:04.189541     779 scope.go:122] "RemoveContainer" containerID="56b7a73675c4449acdc2e70139dd44f2bdf44ab13102bf6e7bcddf97e4adf5b7"
	Dec 27 20:59:04 no-preload-542467 kubelet[779]: E1227 20:59:04.189709     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ztnm9_kubernetes-dashboard(42215b40-cb34-4731-b959-ac8858b9baaa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9" podUID="42215b40-cb34-4731-b959-ac8858b9baaa"
	Dec 27 20:59:04 no-preload-542467 kubelet[779]: I1227 20:59:04.226607     779 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-mhlrk" podStartSLOduration=6.380394049 podStartE2EDuration="17.226582995s" podCreationTimestamp="2025-12-27 20:58:47 +0000 UTC" firstStartedPulling="2025-12-27 20:58:48.314530606 +0000 UTC m=+10.576236418" lastFinishedPulling="2025-12-27 20:58:59.160719544 +0000 UTC m=+21.422425364" observedRunningTime="2025-12-27 20:59:00.206423853 +0000 UTC m=+22.468129665" watchObservedRunningTime="2025-12-27 20:59:04.226582995 +0000 UTC m=+26.488288807"
	Dec 27 20:59:08 no-preload-542467 kubelet[779]: E1227 20:59:08.242277     779 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9" containerName="dashboard-metrics-scraper"
	Dec 27 20:59:08 no-preload-542467 kubelet[779]: I1227 20:59:08.242766     779 scope.go:122] "RemoveContainer" containerID="56b7a73675c4449acdc2e70139dd44f2bdf44ab13102bf6e7bcddf97e4adf5b7"
	Dec 27 20:59:08 no-preload-542467 kubelet[779]: E1227 20:59:08.243020     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ztnm9_kubernetes-dashboard(42215b40-cb34-4731-b959-ac8858b9baaa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9" podUID="42215b40-cb34-4731-b959-ac8858b9baaa"
	Dec 27 20:59:16 no-preload-542467 kubelet[779]: I1227 20:59:16.226390     779 scope.go:122] "RemoveContainer" containerID="be5c6226a604721581fbe9641759c616cf40b5597d34625ad5321668ad3f5a6f"
	Dec 27 20:59:19 no-preload-542467 kubelet[779]: E1227 20:59:19.406578     779 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-p7xs9" containerName="coredns"
	Dec 27 20:59:25 no-preload-542467 kubelet[779]: E1227 20:59:25.924648     779 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9" containerName="dashboard-metrics-scraper"
	Dec 27 20:59:25 no-preload-542467 kubelet[779]: I1227 20:59:25.924694     779 scope.go:122] "RemoveContainer" containerID="56b7a73675c4449acdc2e70139dd44f2bdf44ab13102bf6e7bcddf97e4adf5b7"
	Dec 27 20:59:26 no-preload-542467 kubelet[779]: I1227 20:59:26.252792     779 scope.go:122] "RemoveContainer" containerID="56b7a73675c4449acdc2e70139dd44f2bdf44ab13102bf6e7bcddf97e4adf5b7"
	Dec 27 20:59:26 no-preload-542467 kubelet[779]: E1227 20:59:26.253094     779 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9" containerName="dashboard-metrics-scraper"
	Dec 27 20:59:26 no-preload-542467 kubelet[779]: I1227 20:59:26.253121     779 scope.go:122] "RemoveContainer" containerID="8042f28b87a726dade2ba4ff6db74c840e2d9ebdd3cd33f55037fcc0e835344e"
	Dec 27 20:59:26 no-preload-542467 kubelet[779]: E1227 20:59:26.253262     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ztnm9_kubernetes-dashboard(42215b40-cb34-4731-b959-ac8858b9baaa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9" podUID="42215b40-cb34-4731-b959-ac8858b9baaa"
	Dec 27 20:59:28 no-preload-542467 kubelet[779]: E1227 20:59:28.242505     779 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9" containerName="dashboard-metrics-scraper"
	Dec 27 20:59:28 no-preload-542467 kubelet[779]: I1227 20:59:28.242980     779 scope.go:122] "RemoveContainer" containerID="8042f28b87a726dade2ba4ff6db74c840e2d9ebdd3cd33f55037fcc0e835344e"
	Dec 27 20:59:28 no-preload-542467 kubelet[779]: E1227 20:59:28.243208     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ztnm9_kubernetes-dashboard(42215b40-cb34-4731-b959-ac8858b9baaa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9" podUID="42215b40-cb34-4731-b959-ac8858b9baaa"
	Dec 27 20:59:33 no-preload-542467 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 20:59:33 no-preload-542467 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 20:59:33 no-preload-542467 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [0b5f1122d2bae474f14bb22c11d66a7c0b17063ca9cd0fabf5651aa48608c872] <==
	2025/12/27 20:58:59 Using namespace: kubernetes-dashboard
	2025/12/27 20:58:59 Using in-cluster config to connect to apiserver
	2025/12/27 20:58:59 Using secret token for csrf signing
	2025/12/27 20:58:59 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 20:58:59 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 20:58:59 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 20:58:59 Generating JWE encryption key
	2025/12/27 20:58:59 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 20:58:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 20:58:59 Initializing JWE encryption key from synchronized object
	2025/12/27 20:58:59 Creating in-cluster Sidecar client
	2025/12/27 20:58:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:58:59 Serving insecurely on HTTP port: 9090
	2025/12/27 20:59:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:58:59 Starting overwatch
	
	
	==> storage-provisioner [be5c6226a604721581fbe9641759c616cf40b5597d34625ad5321668ad3f5a6f] <==
	I1227 20:58:45.425145       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 20:59:15.427378       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e00bd10efcab4d9ccd6d7493ae80baa4f6b32616652432abfbc287b063e25f59] <==
	I1227 20:59:16.283196       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 20:59:16.294146       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 20:59:16.294198       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 20:59:16.296884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:59:19.751753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:59:24.013085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:59:27.612226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:59:30.665895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:59:33.688293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:59:33.695713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:59:33.695867       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 20:59:33.696025       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-542467_8ebc9a03-a8df-43b4-a1b1-4d1d9cee23d1!
	I1227 20:59:33.696646       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cc82333b-f666-454a-923f-92228b1762ed", APIVersion:"v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-542467_8ebc9a03-a8df-43b4-a1b1-4d1d9cee23d1 became leader
	W1227 20:59:33.703696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:59:33.707274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:59:33.796261       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-542467_8ebc9a03-a8df-43b4-a1b1-4d1d9cee23d1!
	W1227 20:59:35.710925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:59:35.718101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-542467 -n no-preload-542467
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-542467 -n no-preload-542467: exit status 2 (379.333237ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-542467 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
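For reference, the APIServer status probe logged above can be reproduced outside the test harness. The sketch below is an illustration only (it is not the helpers_test.go implementation); the minikube binary path and profile name are taken from this run, and exit status 2 is treated the same way the helper does ("may be ok" when stdout still reports Running).

	// Sketch only: approximates the status probe shown above, not the actual
	// helpers_test.go code. Binary path and profile name are assumptions
	// copied from this run's log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "no-preload-542467"
		cmd := exec.Command("out/minikube-linux-arm64",
			"status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
		out, err := cmd.Output() // stdout is still populated on a non-zero exit
		status := strings.TrimSpace(string(out))

		// The helper logs exit status 2 as an error but continues ("may be ok"),
		// since stdout may still report the component as Running.
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 2 {
			fmt.Printf("status error: exit status 2 (may be ok), apiserver=%s\n", status)
			return
		}
		if err != nil {
			fmt.Printf("status failed: %v\n", err)
			return
		}
		fmt.Printf("apiserver=%s\n", status)
	}
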
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-542467
helpers_test.go:244: (dbg) docker inspect no-preload-542467:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dd7872488d6d42d5f37285938726aa6ef58b390c3cf12a82967c0d0945a69379",
	        "Created": "2025-12-27T20:57:05.049440772Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 515777,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:58:29.721544865Z",
	            "FinishedAt": "2025-12-27T20:58:28.642739322Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/dd7872488d6d42d5f37285938726aa6ef58b390c3cf12a82967c0d0945a69379/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dd7872488d6d42d5f37285938726aa6ef58b390c3cf12a82967c0d0945a69379/hostname",
	        "HostsPath": "/var/lib/docker/containers/dd7872488d6d42d5f37285938726aa6ef58b390c3cf12a82967c0d0945a69379/hosts",
	        "LogPath": "/var/lib/docker/containers/dd7872488d6d42d5f37285938726aa6ef58b390c3cf12a82967c0d0945a69379/dd7872488d6d42d5f37285938726aa6ef58b390c3cf12a82967c0d0945a69379-json.log",
	        "Name": "/no-preload-542467",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-542467:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-542467",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dd7872488d6d42d5f37285938726aa6ef58b390c3cf12a82967c0d0945a69379",
	                "LowerDir": "/var/lib/docker/overlay2/d95d5d7678527e5e563d3be9d484c3b526882d1f0fcfc24ff6bfe885fd5f4f8f-init/diff:/var/lib/docker/overlay2/637b50446ca6fdd1c68ecc122017c6d056564f2d823c643662c4eb8789019c20/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d95d5d7678527e5e563d3be9d484c3b526882d1f0fcfc24ff6bfe885fd5f4f8f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d95d5d7678527e5e563d3be9d484c3b526882d1f0fcfc24ff6bfe885fd5f4f8f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d95d5d7678527e5e563d3be9d484c3b526882d1f0fcfc24ff6bfe885fd5f4f8f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-542467",
	                "Source": "/var/lib/docker/volumes/no-preload-542467/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-542467",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-542467",
	                "name.minikube.sigs.k8s.io": "no-preload-542467",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d51b7576ee3e2c454a39a423ea03d9b8e54acef384828538ad39e69d035b99bc",
	            "SandboxKey": "/var/run/docker/netns/d51b7576ee3e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-542467": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:8e:51:df:01:51",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c1ebbbafc12790a4f974a3988a1224bfe471b8982037ddfef20526083d80bfe8",
	                    "EndpointID": "1be02c98d1d0727f438100e0f6de5dee51ba3c4fa16e5d7433e75d9addcef82a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-542467",
	                        "dd7872488d6d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
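The docker inspect output above contains the fields this pause post-mortem cares about: State.Running, State.Paused, and the published 8443 API server port. A minimal sketch of decoding just those fields is shown below; the struct covers only what is needed here and is an illustration, not minikube's own inspection code.

	// Sketch only: decodes the subset of `docker inspect` JSON used above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type inspectEntry struct {
		Name  string `json:"Name"`
		State struct {
			Status  string `json:"Status"`
			Running bool   `json:"Running"`
			Paused  bool   `json:"Paused"`
		} `json:"State"`
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIP   string `json:"HostIp"`
				HostPort string `json:"HostPort"`
			} `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	func main() {
		// docker inspect prints a JSON array, one entry per inspected object.
		out, err := exec.Command("docker", "inspect", "no-preload-542467").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			panic(err)
		}
		for _, e := range entries {
			fmt.Printf("%s: status=%s paused=%v apiserver(8443)=%v\n",
				e.Name, e.State.Status, e.State.Paused,
				e.NetworkSettings.Ports["8443/tcp"])
		}
	}
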
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-542467 -n no-preload-542467
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-542467 -n no-preload-542467: exit status 2 (350.591278ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-542467 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-542467 logs -n 25: (1.366255286s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ ssh     │ force-systemd-flag-604544 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-604544         │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ delete  │ -p force-systemd-flag-604544                                                                                                                                                                                                                  │ force-systemd-flag-604544         │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:57 UTC │
	│ start   │ -p newest-cni-549946 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-549946                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:57 UTC │ 27 Dec 25 20:58 UTC │
	│ addons  │ enable metrics-server -p newest-cni-549946 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-549946                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │                     │
	│ stop    │ -p newest-cni-549946 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-549946                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ addons  │ enable dashboard -p newest-cni-549946 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-549946                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ start   │ -p newest-cni-549946 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-549946                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ addons  │ enable metrics-server -p no-preload-542467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-542467                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │                     │
	│ stop    │ -p no-preload-542467 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-542467                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ image   │ newest-cni-549946 image list --format=json                                                                                                                                                                                                    │ newest-cni-549946                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ pause   │ -p newest-cni-549946 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-549946                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │                     │
	│ delete  │ -p newest-cni-549946                                                                                                                                                                                                                          │ newest-cni-549946                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ delete  │ -p newest-cni-549946                                                                                                                                                                                                                          │ newest-cni-549946                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ start   │ -p test-preload-dl-gcs-038558 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-038558        │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-542467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-542467                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ start   │ -p no-preload-542467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-542467                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:59 UTC │
	│ delete  │ -p test-preload-dl-gcs-038558                                                                                                                                                                                                                 │ test-preload-dl-gcs-038558        │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ start   │ -p test-preload-dl-github-371459 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-371459     │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │                     │
	│ delete  │ -p test-preload-dl-github-371459                                                                                                                                                                                                              │ test-preload-dl-github-371459     │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-876907 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-876907 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-876907                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-876907 │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:58 UTC │
	│ start   │ -p auto-037975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-037975                       │ jenkins │ v1.37.0 │ 27 Dec 25 20:58 UTC │ 27 Dec 25 20:59 UTC │
	│ ssh     │ -p auto-037975 pgrep -a kubelet                                                                                                                                                                                                               │ auto-037975                       │ jenkins │ v1.37.0 │ 27 Dec 25 20:59 UTC │ 27 Dec 25 20:59 UTC │
	│ image   │ no-preload-542467 image list --format=json                                                                                                                                                                                                    │ no-preload-542467                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:59 UTC │ 27 Dec 25 20:59 UTC │
	│ pause   │ -p no-preload-542467 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-542467                 │ jenkins │ v1.37.0 │ 27 Dec 25 20:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:58:41
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:58:41.138140  517451 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:58:41.138248  517451 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:58:41.138286  517451 out.go:374] Setting ErrFile to fd 2...
	I1227 20:58:41.138293  517451 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:58:41.138542  517451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:58:41.138959  517451 out.go:368] Setting JSON to false
	I1227 20:58:41.139799  517451 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9674,"bootTime":1766859448,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:58:41.139860  517451 start.go:143] virtualization:  
	I1227 20:58:41.144577  517451 out.go:179] * [auto-037975] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:58:41.147604  517451 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:58:41.147675  517451 notify.go:221] Checking for updates...
	I1227 20:58:41.156376  517451 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:58:41.159362  517451 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:58:41.162292  517451 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:58:41.165061  517451 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:58:41.167938  517451 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:58:41.171371  517451 config.go:182] Loaded profile config "no-preload-542467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:58:41.171502  517451 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:58:41.207457  517451 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:58:41.207563  517451 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:58:41.299035  517451 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 20:58:41.287721234 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:58:41.299141  517451 docker.go:319] overlay module found
	I1227 20:58:41.305529  517451 out.go:179] * Using the docker driver based on user configuration
	I1227 20:58:41.308396  517451 start.go:309] selected driver: docker
	I1227 20:58:41.308416  517451 start.go:928] validating driver "docker" against <nil>
	I1227 20:58:41.308429  517451 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:58:41.309126  517451 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:58:41.379427  517451 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 20:58:41.370451524 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:58:41.379564  517451 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 20:58:41.379780  517451 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:58:41.382745  517451 out.go:179] * Using Docker driver with root privileges
	I1227 20:58:41.385673  517451 cni.go:84] Creating CNI manager for ""
	I1227 20:58:41.385739  517451 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:58:41.385753  517451 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 20:58:41.385835  517451 start.go:353] cluster config:
	{Name:auto-037975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-037975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s Rosetta:false}
	I1227 20:58:41.389027  517451 out.go:179] * Starting "auto-037975" primary control-plane node in "auto-037975" cluster
	I1227 20:58:41.391898  517451 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 20:58:41.394782  517451 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:58:41.397614  517451 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:58:41.397657  517451 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 20:58:41.397667  517451 cache.go:65] Caching tarball of preloaded images
	I1227 20:58:41.397683  517451 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:58:41.397747  517451 preload.go:251] Found /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 20:58:41.397756  517451 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 20:58:41.397874  517451 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/config.json ...
	I1227 20:58:41.397891  517451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/config.json: {Name:mk80aae67ee487fd1e849ea2310bba72ea3a5bd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:41.418370  517451 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:58:41.418450  517451 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:58:41.418504  517451 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:58:41.418537  517451 start.go:360] acquireMachinesLock for auto-037975: {Name:mkbb5944f1db4111ae7674aa61f644093ca0cc2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:58:41.418756  517451 start.go:364] duration metric: took 139.205µs to acquireMachinesLock for "auto-037975"
	I1227 20:58:41.418791  517451 start.go:93] Provisioning new machine with config: &{Name:auto-037975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-037975 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:58:41.418855  517451 start.go:125] createHost starting for "" (driver="docker")
	I1227 20:58:39.387256  515650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:58:39.411081  515650 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 20:58:39.411154  515650 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 20:58:39.461458  515650 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 20:58:39.461479  515650 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 20:58:39.498178  515650 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 20:58:39.498197  515650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 20:58:39.536864  515650 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 20:58:39.536884  515650 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 20:58:39.587226  515650 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 20:58:39.587304  515650 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 20:58:39.604959  515650 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 20:58:39.605040  515650 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 20:58:39.622977  515650 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 20:58:39.623053  515650 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 20:58:39.643283  515650 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:58:39.643354  515650 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 20:58:39.668538  515650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:58:43.252867  515650 node_ready.go:49] node "no-preload-542467" is "Ready"
	I1227 20:58:43.252898  515650 node_ready.go:38] duration metric: took 3.91011649s for node "no-preload-542467" to be "Ready" ...
	I1227 20:58:43.252912  515650 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:58:43.252970  515650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:58:46.045556  515650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.7997778s)
	I1227 20:58:46.045621  515650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.658297836s)
	I1227 20:58:46.098294  515650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.429663254s)
	I1227 20:58:46.098452  515650 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.845464571s)
	I1227 20:58:46.098478  515650 api_server.go:72] duration metric: took 7.26882894s to wait for apiserver process to appear ...
	I1227 20:58:46.098505  515650 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:58:46.098535  515650 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 20:58:46.108534  515650 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 20:58:46.110285  515650 api_server.go:141] control plane version: v1.35.0
	I1227 20:58:46.110316  515650 api_server.go:131] duration metric: took 11.803151ms to wait for apiserver health ...
	I1227 20:58:46.110325  515650 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:58:46.115185  515650 system_pods.go:59] 8 kube-system pods found
	I1227 20:58:46.115257  515650 system_pods.go:61] "coredns-7d764666f9-p7xs9" [b5728b9d-d5dd-4946-971a-543ccae4bbb5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:58:46.115277  515650 system_pods.go:61] "etcd-no-preload-542467" [b2fc9fb5-3a79-4162-afa1-f4132b555027] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:58:46.115289  515650 system_pods.go:61] "kindnet-2v4p8" [9c2c77c3-7d5e-45f4-8eea-f6928cf134f5] Running
	I1227 20:58:46.115300  515650 system_pods.go:61] "kube-apiserver-no-preload-542467" [fb1f5660-5f3a-42ef-b2e5-4ba7758fcf27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:58:46.115319  515650 system_pods.go:61] "kube-controller-manager-no-preload-542467" [0e3c13f8-db03-45b4-a52e-b77e3414cdbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:58:46.115326  515650 system_pods.go:61] "kube-proxy-7mx96" [8d494c52-2b6e-431e-a66c-7f1e3f28a070] Running
	I1227 20:58:46.115338  515650 system_pods.go:61] "kube-scheduler-no-preload-542467" [576aa9a8-b4fa-4e56-a9a5-b438a60e0e18] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:58:46.115342  515650 system_pods.go:61] "storage-provisioner" [20b095bb-fb60-4860-ae08-c05d950bd9ea] Running
	I1227 20:58:46.115350  515650 system_pods.go:74] duration metric: took 5.018383ms to wait for pod list to return data ...
	I1227 20:58:46.115362  515650 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:58:46.122654  515650 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-542467 addons enable metrics-server
	
	I1227 20:58:41.422243  517451 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 20:58:41.422464  517451 start.go:159] libmachine.API.Create for "auto-037975" (driver="docker")
	I1227 20:58:41.422492  517451 client.go:173] LocalClient.Create starting
	I1227 20:58:41.422549  517451 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem
	I1227 20:58:41.422592  517451 main.go:144] libmachine: Decoding PEM data...
	I1227 20:58:41.422608  517451 main.go:144] libmachine: Parsing certificate...
	I1227 20:58:41.422664  517451 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem
	I1227 20:58:41.422682  517451 main.go:144] libmachine: Decoding PEM data...
	I1227 20:58:41.422693  517451 main.go:144] libmachine: Parsing certificate...
	I1227 20:58:41.423329  517451 cli_runner.go:164] Run: docker network inspect auto-037975 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 20:58:41.451891  517451 cli_runner.go:211] docker network inspect auto-037975 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 20:58:41.451969  517451 network_create.go:284] running [docker network inspect auto-037975] to gather additional debugging logs...
	I1227 20:58:41.451985  517451 cli_runner.go:164] Run: docker network inspect auto-037975
	W1227 20:58:41.471282  517451 cli_runner.go:211] docker network inspect auto-037975 returned with exit code 1
	I1227 20:58:41.471318  517451 network_create.go:287] error running [docker network inspect auto-037975]: docker network inspect auto-037975: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-037975 not found
	I1227 20:58:41.471331  517451 network_create.go:289] output of [docker network inspect auto-037975]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-037975 not found
	
	** /stderr **
	I1227 20:58:41.471435  517451 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:58:41.489211  517451 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9521cb9225c5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:1d:ef:38:b7:a6} reservation:<nil>}
	I1227 20:58:41.489627  517451 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-68d11cc2ab47 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:8d:ad:37:cb:fe} reservation:<nil>}
	I1227 20:58:41.489856  517451 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d3b7cfff4895 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:4a:e3:08:10:2f} reservation:<nil>}
	I1227 20:58:41.490132  517451 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c1ebbbafc127 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7a:a5:c1:28:f3:5c} reservation:<nil>}
	I1227 20:58:41.490539  517451 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d5410}
	I1227 20:58:41.490556  517451 network_create.go:124] attempt to create docker network auto-037975 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1227 20:58:41.490620  517451 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-037975 auto-037975
	I1227 20:58:41.556691  517451 network_create.go:108] docker network auto-037975 192.168.85.0/24 created
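The lines above show minikube probing the Docker bridges already on the host (192.168.49.0/24, .58, .67 and .76, i.e. a ladder that steps the third octet by 9) and settling on the first unused /24, 192.168.85.0/24, before creating the auto-037975 network with that subnet and gateway. As a rough illustration only — this is not minikube's network.go, and firstFreeSubnet is a hypothetical helper — the selection visible in the log can be reproduced like this:

	package main

	import (
		"fmt"
		"net"
	)

	// firstFreeSubnet walks the 192.168.49.0/24, .58, .67, ... ladder seen in the
	// log (third octet stepped by 9) and returns the first candidate that does not
	// overlap any subnet already in use.
	func firstFreeSubnet(taken []string) (string, error) {
		takenNets := make([]*net.IPNet, 0, len(taken))
		for _, cidr := range taken {
			_, n, err := net.ParseCIDR(cidr)
			if err != nil {
				return "", err
			}
			takenNets = append(takenNets, n)
		}
		for third := 49; third <= 255; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			_, candidate, _ := net.ParseCIDR(cidr)
			free := true
			for _, t := range takenNets {
				if t.Contains(candidate.IP) || candidate.Contains(t.IP) {
					free = false
					break
				}
			}
			if free {
				return cidr, nil
			}
		}
		return "", fmt.Errorf("no free private /24 found")
	}

	func main() {
		free, err := firstFreeSubnet([]string{
			"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24",
		})
		if err != nil {
			panic(err)
		}
		fmt.Println(free) // prints 192.168.85.0/24, matching the subnet chosen above
	}

With the four bridges from the log marked as taken, the sketch lands on 192.168.85.0/24, the same subnet the run passes to docker network create.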
	I1227 20:58:41.556719  517451 kic.go:121] calculated static IP "192.168.85.2" for the "auto-037975" container
	I1227 20:58:41.556802  517451 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 20:58:41.577425  517451 cli_runner.go:164] Run: docker volume create auto-037975 --label name.minikube.sigs.k8s.io=auto-037975 --label created_by.minikube.sigs.k8s.io=true
	I1227 20:58:41.600392  517451 oci.go:103] Successfully created a docker volume auto-037975
	I1227 20:58:41.600488  517451 cli_runner.go:164] Run: docker run --rm --name auto-037975-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-037975 --entrypoint /usr/bin/test -v auto-037975:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 20:58:42.572664  517451 oci.go:107] Successfully prepared a docker volume auto-037975
	I1227 20:58:42.572726  517451 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:58:42.572736  517451 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 20:58:42.572799  517451 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-037975:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 20:58:46.123031  515650 default_sa.go:45] found service account: "default"
	I1227 20:58:46.123055  515650 default_sa.go:55] duration metric: took 7.687283ms for default service account to be created ...
	I1227 20:58:46.123066  515650 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:58:46.168986  515650 system_pods.go:86] 8 kube-system pods found
	I1227 20:58:46.169015  515650 system_pods.go:89] "coredns-7d764666f9-p7xs9" [b5728b9d-d5dd-4946-971a-543ccae4bbb5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:58:46.169027  515650 system_pods.go:89] "etcd-no-preload-542467" [b2fc9fb5-3a79-4162-afa1-f4132b555027] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:58:46.169034  515650 system_pods.go:89] "kindnet-2v4p8" [9c2c77c3-7d5e-45f4-8eea-f6928cf134f5] Running
	I1227 20:58:46.169041  515650 system_pods.go:89] "kube-apiserver-no-preload-542467" [fb1f5660-5f3a-42ef-b2e5-4ba7758fcf27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:58:46.169048  515650 system_pods.go:89] "kube-controller-manager-no-preload-542467" [0e3c13f8-db03-45b4-a52e-b77e3414cdbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:58:46.169053  515650 system_pods.go:89] "kube-proxy-7mx96" [8d494c52-2b6e-431e-a66c-7f1e3f28a070] Running
	I1227 20:58:46.169060  515650 system_pods.go:89] "kube-scheduler-no-preload-542467" [576aa9a8-b4fa-4e56-a9a5-b438a60e0e18] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:58:46.169064  515650 system_pods.go:89] "storage-provisioner" [20b095bb-fb60-4860-ae08-c05d950bd9ea] Running
	I1227 20:58:46.169072  515650 system_pods.go:126] duration metric: took 45.999612ms to wait for k8s-apps to be running ...
	I1227 20:58:46.169079  515650 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:58:46.169131  515650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:58:46.188213  515650 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1227 20:58:46.210760  515650 system_svc.go:56] duration metric: took 41.671057ms WaitForService to wait for kubelet
	I1227 20:58:46.215833  515650 kubeadm.go:587] duration metric: took 7.386173102s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:58:46.215872  515650 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:58:46.251280  515650 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:58:46.251362  515650 node_conditions.go:123] node cpu capacity is 2
	I1227 20:58:46.251390  515650 node_conditions.go:105] duration metric: took 35.511232ms to run NodePressure ...
	I1227 20:58:46.251434  515650 start.go:242] waiting for startup goroutines ...
	I1227 20:58:46.252710  515650 addons.go:530] duration metric: took 7.422589541s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1227 20:58:46.252745  515650 start.go:247] waiting for cluster config update ...
	I1227 20:58:46.252759  515650 start.go:256] writing updated cluster config ...
	I1227 20:58:46.254988  515650 ssh_runner.go:195] Run: rm -f paused
	I1227 20:58:46.259271  515650 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:58:46.262783  515650 pod_ready.go:83] waiting for pod "coredns-7d764666f9-p7xs9" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 20:58:48.272377  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
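Process 515650 above has entered minikube's "extra waiting" phase: it polls each labelled kube-system pod (here coredns-7d764666f9-p7xs9) until its Ready condition turns True or the pod disappears, for up to 4m0s. A minimal client-go sketch of that kind of loop — waitReady is a hypothetical helper, not the actual pod_ready.go code — could look like:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitReady polls one kube-system pod until its Ready condition is True or the
	// deadline expires, mirroring the "extra waiting up to 4m0s" loop in the log.
	func waitReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		ctx, cancel := context.WithTimeout(context.Background(), timeout)
		defer cancel()
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("pod %q not Ready after %s", name, timeout)
			case <-time.After(2 * time.Second):
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitReady(cs, "coredns-7d764666f9-p7xs9", 4*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}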
	I1227 20:58:46.632186  517451 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-037975:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.059343902s)
	I1227 20:58:46.632223  517451 kic.go:203] duration metric: took 4.059483458s to extract preloaded images to volume ...
	W1227 20:58:46.632364  517451 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 20:58:46.632473  517451 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 20:58:46.688533  517451 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-037975 --name auto-037975 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-037975 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-037975 --network auto-037975 --ip 192.168.85.2 --volume auto-037975:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 20:58:47.002848  517451 cli_runner.go:164] Run: docker container inspect auto-037975 --format={{.State.Running}}
	I1227 20:58:47.025952  517451 cli_runner.go:164] Run: docker container inspect auto-037975 --format={{.State.Status}}
	I1227 20:58:47.044308  517451 cli_runner.go:164] Run: docker exec auto-037975 stat /var/lib/dpkg/alternatives/iptables
	I1227 20:58:47.113163  517451 oci.go:144] the created container "auto-037975" has a running status.
	I1227 20:58:47.113190  517451 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/auto-037975/id_rsa...
	I1227 20:58:47.277926  517451 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22332-272475/.minikube/machines/auto-037975/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 20:58:47.311228  517451 cli_runner.go:164] Run: docker container inspect auto-037975 --format={{.State.Status}}
	I1227 20:58:47.339220  517451 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 20:58:47.339239  517451 kic_runner.go:114] Args: [docker exec --privileged auto-037975 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 20:58:47.391901  517451 cli_runner.go:164] Run: docker container inspect auto-037975 --format={{.State.Status}}
	I1227 20:58:47.418389  517451 machine.go:94] provisionDockerMachine start ...
	I1227 20:58:47.418489  517451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-037975
	I1227 20:58:47.445773  517451 main.go:144] libmachine: Using SSH client type: native
	I1227 20:58:47.446232  517451 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1227 20:58:47.446246  517451 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:58:47.447836  517451 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46590->127.0.0.1:33458: read: connection reset by peer
	I1227 20:58:50.622986  517451 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-037975
	
	I1227 20:58:50.623069  517451 ubuntu.go:182] provisioning hostname "auto-037975"
	I1227 20:58:50.623170  517451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-037975
	I1227 20:58:50.653129  517451 main.go:144] libmachine: Using SSH client type: native
	I1227 20:58:50.653432  517451 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1227 20:58:50.653528  517451 main.go:144] libmachine: About to run SSH command:
	sudo hostname auto-037975 && echo "auto-037975" | sudo tee /etc/hostname
	I1227 20:58:50.822167  517451 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-037975
	
	I1227 20:58:50.822319  517451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-037975
	I1227 20:58:50.848011  517451 main.go:144] libmachine: Using SSH client type: native
	I1227 20:58:50.848437  517451 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1227 20:58:50.848462  517451 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-037975' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-037975/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-037975' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:58:51.003876  517451 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:58:51.003909  517451 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-272475/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-272475/.minikube}
	I1227 20:58:51.003967  517451 ubuntu.go:190] setting up certificates
	I1227 20:58:51.003976  517451 provision.go:84] configureAuth start
	I1227 20:58:51.004039  517451 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-037975
	I1227 20:58:51.031722  517451 provision.go:143] copyHostCerts
	I1227 20:58:51.031792  517451 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem, removing ...
	I1227 20:58:51.031806  517451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem
	I1227 20:58:51.031908  517451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/ca.pem (1078 bytes)
	I1227 20:58:51.032029  517451 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem, removing ...
	I1227 20:58:51.032041  517451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem
	I1227 20:58:51.032074  517451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/cert.pem (1123 bytes)
	I1227 20:58:51.032173  517451 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem, removing ...
	I1227 20:58:51.032203  517451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem
	I1227 20:58:51.032239  517451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-272475/.minikube/key.pem (1675 bytes)
	I1227 20:58:51.032318  517451 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem org=jenkins.auto-037975 san=[127.0.0.1 192.168.85.2 auto-037975 localhost minikube]
	I1227 20:58:51.276594  517451 provision.go:177] copyRemoteCerts
	I1227 20:58:51.276659  517451 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:58:51.276706  517451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-037975
	I1227 20:58:51.300817  517451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/auto-037975/id_rsa Username:docker}
	I1227 20:58:51.406718  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 20:58:51.427931  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1227 20:58:51.448503  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 20:58:51.468309  517451 provision.go:87] duration metric: took 464.310953ms to configureAuth
	I1227 20:58:51.468387  517451 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:58:51.468615  517451 config.go:182] Loaded profile config "auto-037975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:58:51.468785  517451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-037975
	I1227 20:58:51.488499  517451 main.go:144] libmachine: Using SSH client type: native
	I1227 20:58:51.488810  517451 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1227 20:58:51.488824  517451 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 20:58:51.973288  517451 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 20:58:51.973312  517451 machine.go:97] duration metric: took 4.554903221s to provisionDockerMachine
	I1227 20:58:51.973323  517451 client.go:176] duration metric: took 10.550825068s to LocalClient.Create
	I1227 20:58:51.973337  517451 start.go:167] duration metric: took 10.550873887s to libmachine.API.Create "auto-037975"
	I1227 20:58:51.973344  517451 start.go:293] postStartSetup for "auto-037975" (driver="docker")
	I1227 20:58:51.973355  517451 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:58:51.973416  517451 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:58:51.973507  517451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-037975
	I1227 20:58:51.993267  517451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/auto-037975/id_rsa Username:docker}
	I1227 20:58:52.108155  517451 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:58:52.111864  517451 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:58:52.111889  517451 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:58:52.111900  517451 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/addons for local assets ...
	I1227 20:58:52.111952  517451 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-272475/.minikube/files for local assets ...
	I1227 20:58:52.112025  517451 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem -> 2743362.pem in /etc/ssl/certs
	I1227 20:58:52.112127  517451 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:58:52.120638  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:58:52.144037  517451 start.go:296] duration metric: took 170.677888ms for postStartSetup
	I1227 20:58:52.144395  517451 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-037975
	I1227 20:58:52.169518  517451 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/config.json ...
	I1227 20:58:52.169787  517451 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:58:52.169826  517451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-037975
	I1227 20:58:52.193294  517451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/auto-037975/id_rsa Username:docker}
	I1227 20:58:52.299528  517451 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:58:52.304288  517451 start.go:128] duration metric: took 10.885418254s to createHost
	I1227 20:58:52.304310  517451 start.go:83] releasing machines lock for "auto-037975", held for 10.885540713s
	I1227 20:58:52.304375  517451 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-037975
	I1227 20:58:52.327429  517451 ssh_runner.go:195] Run: cat /version.json
	I1227 20:58:52.327481  517451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-037975
	I1227 20:58:52.327706  517451 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:58:52.327760  517451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-037975
	I1227 20:58:52.361819  517451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/auto-037975/id_rsa Username:docker}
	I1227 20:58:52.362163  517451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/auto-037975/id_rsa Username:docker}
	I1227 20:58:52.595615  517451 ssh_runner.go:195] Run: systemctl --version
	I1227 20:58:52.603346  517451 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 20:58:52.659135  517451 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:58:52.664305  517451 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:58:52.664387  517451 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:58:52.697078  517451 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 20:58:52.697105  517451 start.go:496] detecting cgroup driver to use...
	I1227 20:58:52.697137  517451 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:58:52.697187  517451 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 20:58:52.717701  517451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 20:58:52.730434  517451 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:58:52.730557  517451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:58:52.748783  517451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:58:52.769316  517451 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:58:52.896765  517451 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:58:53.069985  517451 docker.go:234] disabling docker service ...
	I1227 20:58:53.070052  517451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:58:53.096159  517451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:58:53.110730  517451 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:58:53.314278  517451 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:58:53.493565  517451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:58:53.508496  517451 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:58:53.523940  517451 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 20:58:53.524027  517451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:53.533106  517451 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 20:58:53.533197  517451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:53.542304  517451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:53.551035  517451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:53.559766  517451 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:58:53.573842  517451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:53.582733  517451 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:53.596754  517451 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 20:58:53.605913  517451 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:58:53.616137  517451 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:58:53.624191  517451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:58:53.773319  517451 ssh_runner.go:195] Run: sudo systemctl restart crio
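The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted: pin pause_image to registry.k8s.io/pause:3.10.1, force cgroup_manager to "cgroupfs", re-create conmon_cgroup = "pod" right after it, and add "net.ipv4.ip_unprivileged_port_start=0" to default_sysctls. As a small illustration only (this is not minikube's crio.go; the starting drop-in content is assumed), the first few rewrites expressed as Go regexp substitutions:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Assumed representative drop-in content before the edits.
		conf := "pause_image = \"registry.k8s.io/pause:3.9\"\n" +
			"cgroup_manager = \"systemd\"\n" +
			"conmon_cgroup = \"system.slice\"\n"

		// pin the pause image, as the first sed in the log does
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		// switch the cgroup manager to cgroupfs
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		// drop any existing conmon_cgroup line, then re-add it as "pod"
		conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
			ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

		fmt.Print(conf)
	}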
	I1227 20:58:54.047188  517451 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 20:58:54.047339  517451 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 20:58:54.055966  517451 start.go:574] Will wait 60s for crictl version
	I1227 20:58:54.056085  517451 ssh_runner.go:195] Run: which crictl
	I1227 20:58:54.060811  517451 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:58:54.102754  517451 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 20:58:54.102913  517451 ssh_runner.go:195] Run: crio --version
	I1227 20:58:54.141243  517451 ssh_runner.go:195] Run: crio --version
	I1227 20:58:54.204261  517451 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	W1227 20:58:50.768138  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	W1227 20:58:52.770965  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	I1227 20:58:54.207360  517451 cli_runner.go:164] Run: docker network inspect auto-037975 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:58:54.229829  517451 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 20:58:54.234321  517451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
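The bash one-liner above is an idempotent /etc/hosts edit: it drops any stale line ending in "<tab>host.minikube.internal" and appends a fresh entry pointing that name at the network gateway 192.168.85.1 (the same pattern is repeated later for control-plane.minikube.internal). The same upsert, sketched as a hypothetical Go helper (upsertHostsEntry is not part of minikube):

	package main

	import (
		"fmt"
		"strings"
	)

	// upsertHostsEntry removes any line ending in "\t<name>" and appends
	// "<ip>\t<name>", matching the grep -v / echo pipeline in the log.
	func upsertHostsEntry(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop the stale entry
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		before := "127.0.0.1\tlocalhost\n192.168.76.1\thost.minikube.internal\n"
		fmt.Print(upsertHostsEntry(before, "192.168.85.1", "host.minikube.internal"))
	}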
	I1227 20:58:54.247893  517451 kubeadm.go:884] updating cluster {Name:auto-037975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-037975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:58:54.248005  517451 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 20:58:54.248058  517451 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:58:54.315599  517451 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:58:54.315619  517451 crio.go:433] Images already preloaded, skipping extraction
	I1227 20:58:54.315673  517451 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:58:54.359384  517451 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 20:58:54.359430  517451 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:58:54.359444  517451 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1227 20:58:54.359532  517451 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-037975 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:auto-037975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:58:54.359614  517451 ssh_runner.go:195] Run: crio config
	I1227 20:58:54.463486  517451 cni.go:84] Creating CNI manager for ""
	I1227 20:58:54.463556  517451 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:58:54.463586  517451 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:58:54.463646  517451 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-037975 NodeName:auto-037975 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/
manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:58:54.463827  517451 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-037975"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:58:54.463933  517451 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:58:54.473152  517451 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:58:54.473268  517451 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:58:54.480842  517451 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1227 20:58:54.496298  517451 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:58:54.509431  517451 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
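At this point the rendered multi-document kubeadm config dumped earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) has been copied to /var/tmp/minikube/kubeadm.yaml.new on the node. For a quick sanity check of which documents such a file carries, one could decode it with gopkg.in/yaml.v3 — a sketch under assumptions; minikube itself does not run this step:

	package main

	import (
		"fmt"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
		if err != nil {
			panic(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				break // io.EOF once every YAML document has been read
			}
			fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
		}
	}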
	I1227 20:58:54.522995  517451 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:58:54.527306  517451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:58:54.536953  517451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:58:54.694625  517451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:58:54.712372  517451 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975 for IP: 192.168.85.2
	I1227 20:58:54.712443  517451 certs.go:195] generating shared ca certs ...
	I1227 20:58:54.712473  517451 certs.go:227] acquiring lock for ca certs: {Name:mk091e9bc67fb705ebc6a94fa171c8589e848cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:54.712647  517451 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key
	I1227 20:58:54.712728  517451 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key
	I1227 20:58:54.712767  517451 certs.go:257] generating profile certs ...
	I1227 20:58:54.712857  517451 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/client.key
	I1227 20:58:54.712899  517451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/client.crt with IP's: []
	I1227 20:58:55.013077  517451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/client.crt ...
	I1227 20:58:55.013118  517451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/client.crt: {Name:mkf046e2079b7fa075d1cd71697496ffbf7320ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:55.013396  517451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/client.key ...
	I1227 20:58:55.013417  517451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/client.key: {Name:mka7aa226a768ec29ad6e29bb6acc414ea10d550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:55.013635  517451 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/apiserver.key.36204d99
	I1227 20:58:55.013676  517451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/apiserver.crt.36204d99 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1227 20:58:55.419503  517451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/apiserver.crt.36204d99 ...
	I1227 20:58:55.419535  517451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/apiserver.crt.36204d99: {Name:mk777375fe379f76f324a2aeed9e5dc766756a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:55.419780  517451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/apiserver.key.36204d99 ...
	I1227 20:58:55.419797  517451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/apiserver.key.36204d99: {Name:mk27bcf8bc7f5d60a1f156b99ce7c02840ccfcff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:55.419921  517451 certs.go:382] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/apiserver.crt.36204d99 -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/apiserver.crt
	I1227 20:58:55.420042  517451 certs.go:386] copying /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/apiserver.key.36204d99 -> /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/apiserver.key
	I1227 20:58:55.420127  517451 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/proxy-client.key
	I1227 20:58:55.420161  517451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/proxy-client.crt with IP's: []
	I1227 20:58:55.529731  517451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/proxy-client.crt ...
	I1227 20:58:55.529762  517451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/proxy-client.crt: {Name:mkc7aff75cc28455ac101bb11f055ca2ac7a54a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:58:55.529988  517451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/proxy-client.key ...
	I1227 20:58:55.530006  517451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/proxy-client.key: {Name:mkda095a66ed7eb88c3f28148a58d32bba3e9afa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
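certs.go above generates the profile's client, apiserver and proxy-client key pairs; the apiserver serving cert is issued for the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.85.2 listed in the log, and the earlier server cert also carries the auto-037975, localhost and minikube DNS names. For illustration only — a self-signed sketch using just the standard library, not minikube's crypto.go — a certificate with those same SANs can be produced like this:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			// ~26280h, the CertExpiration value shown in the cluster config above
			NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
			},
			DNSNames: []string{"auto-037975", "localhost", "minikube"},
		}
		// self-signed here for brevity; the run above signs with the shared minikubeCA
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			panic(err)
		}
	}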
	I1227 20:58:55.530241  517451 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem (1338 bytes)
	W1227 20:58:55.530303  517451 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336_empty.pem, impossibly tiny 0 bytes
	I1227 20:58:55.530319  517451 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:58:55.530362  517451 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/ca.pem (1078 bytes)
	I1227 20:58:55.530410  517451 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:58:55.530445  517451 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/certs/key.pem (1675 bytes)
	I1227 20:58:55.530511  517451 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem (1708 bytes)
	I1227 20:58:55.531109  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:58:55.551489  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:58:55.574961  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:58:55.597080  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 20:58:55.623581  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1227 20:58:55.646523  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:58:55.668899  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:58:55.692154  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:58:55.727072  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/ssl/certs/2743362.pem --> /usr/share/ca-certificates/2743362.pem (1708 bytes)
	I1227 20:58:55.770152  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:58:55.801474  517451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-272475/.minikube/certs/274336.pem --> /usr/share/ca-certificates/274336.pem (1338 bytes)
	I1227 20:58:55.836272  517451 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:58:55.867571  517451 ssh_runner.go:195] Run: openssl version
	I1227 20:58:55.877271  517451 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2743362.pem
	I1227 20:58:55.885421  517451 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2743362.pem /etc/ssl/certs/2743362.pem
	I1227 20:58:55.893648  517451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2743362.pem
	I1227 20:58:55.898738  517451 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:00 /usr/share/ca-certificates/2743362.pem
	I1227 20:58:55.898842  517451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2743362.pem
	I1227 20:58:55.944545  517451 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:58:55.951999  517451 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2743362.pem /etc/ssl/certs/3ec20f2e.0
	I1227 20:58:55.959356  517451 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:58:55.966504  517451 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:58:55.981995  517451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:58:55.988045  517451 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:58:55.988146  517451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:58:56.047379  517451 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:58:56.059974  517451 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 20:58:56.072972  517451 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/274336.pem
	I1227 20:58:56.082852  517451 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/274336.pem /etc/ssl/certs/274336.pem
	I1227 20:58:56.097178  517451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274336.pem
	I1227 20:58:56.101877  517451 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:00 /usr/share/ca-certificates/274336.pem
	I1227 20:58:56.101949  517451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274336.pem
	I1227 20:58:56.145582  517451 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:58:56.154691  517451 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/274336.pem /etc/ssl/certs/51391683.0
	I1227 20:58:56.162030  517451 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:58:56.166658  517451 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 20:58:56.166742  517451 kubeadm.go:401] StartCluster: {Name:auto-037975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-037975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:58:56.166848  517451 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 20:58:56.166911  517451 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:58:56.199556  517451 cri.go:96] found id: ""
	I1227 20:58:56.199675  517451 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:58:56.210340  517451 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 20:58:56.218082  517451 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:58:56.218172  517451 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:58:56.228484  517451 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:58:56.228506  517451 kubeadm.go:158] found existing configuration files:
	
	I1227 20:58:56.228584  517451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 20:58:56.237276  517451 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:58:56.237364  517451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:58:56.245116  517451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 20:58:56.254018  517451 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:58:56.254122  517451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:58:56.261657  517451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 20:58:56.275090  517451 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:58:56.275176  517451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:58:56.282975  517451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 20:58:56.291016  517451 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:58:56.291108  517451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 20:58:56.298702  517451 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:58:56.354993  517451 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 20:58:56.355417  517451 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 20:58:56.509324  517451 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 20:58:56.509469  517451 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 20:58:56.509535  517451 kubeadm.go:319] OS: Linux
	I1227 20:58:56.509608  517451 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 20:58:56.509687  517451 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 20:58:56.509758  517451 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 20:58:56.509836  517451 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 20:58:56.509910  517451 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 20:58:56.509989  517451 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 20:58:56.510059  517451 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 20:58:56.510134  517451 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 20:58:56.510203  517451 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 20:58:56.644391  517451 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 20:58:56.644562  517451 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 20:58:56.644707  517451 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 20:58:56.665862  517451 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1227 20:58:54.785251  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	W1227 20:58:57.268840  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	W1227 20:58:59.270919  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	I1227 20:58:56.672844  517451 out.go:252]   - Generating certificates and keys ...
	I1227 20:58:56.672973  517451 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 20:58:56.673097  517451 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 20:58:56.846204  517451 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 20:58:57.084661  517451 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 20:58:57.607441  517451 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 20:58:57.802796  517451 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 20:58:58.059688  517451 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 20:58:58.060315  517451 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-037975 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 20:58:58.583594  517451 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 20:58:58.584120  517451 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-037975 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 20:58:58.851092  517451 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 20:58:58.956217  517451 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 20:58:59.124104  517451 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 20:58:59.124443  517451 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 20:58:59.271366  517451 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 20:58:59.437822  517451 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 20:58:59.721183  517451 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 20:58:59.934928  517451 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 20:59:00.329319  517451 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 20:59:00.333733  517451 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 20:59:00.344081  517451 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 20:59:00.347706  517451 out.go:252]   - Booting up control plane ...
	I1227 20:59:00.347835  517451 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 20:59:00.347920  517451 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 20:59:00.349850  517451 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 20:59:00.369399  517451 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 20:59:00.369559  517451 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 20:59:00.381259  517451 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 20:59:00.381835  517451 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 20:59:00.381894  517451 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 20:59:00.568137  517451 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 20:59:00.568875  517451 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1227 20:59:01.769103  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	W1227 20:59:04.276706  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	I1227 20:59:01.571691  517451 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002744451s
	I1227 20:59:01.576012  517451 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 20:59:01.576160  517451 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1227 20:59:01.576286  517451 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 20:59:01.576397  517451 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 20:59:02.587384  517451 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.010163995s
	I1227 20:59:04.494389  517451 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.91841069s
	I1227 20:59:06.079271  517451 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502354568s
	I1227 20:59:06.119032  517451 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 20:59:06.145851  517451 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 20:59:06.160703  517451 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 20:59:06.161571  517451 kubeadm.go:319] [mark-control-plane] Marking the node auto-037975 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 20:59:06.179263  517451 kubeadm.go:319] [bootstrap-token] Using token: sslnvq.tedoe2sw894igal5
	I1227 20:59:06.182151  517451 out.go:252]   - Configuring RBAC rules ...
	I1227 20:59:06.182279  517451 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 20:59:06.191762  517451 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 20:59:06.203143  517451 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 20:59:06.207417  517451 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 20:59:06.211890  517451 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 20:59:06.218037  517451 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 20:59:06.489204  517451 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 20:59:06.924768  517451 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 20:59:07.490375  517451 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 20:59:07.491752  517451 kubeadm.go:319] 
	I1227 20:59:07.491831  517451 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 20:59:07.491843  517451 kubeadm.go:319] 
	I1227 20:59:07.491933  517451 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 20:59:07.491943  517451 kubeadm.go:319] 
	I1227 20:59:07.491969  517451 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 20:59:07.492030  517451 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 20:59:07.492108  517451 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 20:59:07.492122  517451 kubeadm.go:319] 
	I1227 20:59:07.492178  517451 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 20:59:07.492182  517451 kubeadm.go:319] 
	I1227 20:59:07.492229  517451 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 20:59:07.492233  517451 kubeadm.go:319] 
	I1227 20:59:07.492285  517451 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 20:59:07.492360  517451 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 20:59:07.492429  517451 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 20:59:07.492433  517451 kubeadm.go:319] 
	I1227 20:59:07.492517  517451 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 20:59:07.492595  517451 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 20:59:07.492598  517451 kubeadm.go:319] 
	I1227 20:59:07.492682  517451 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token sslnvq.tedoe2sw894igal5 \
	I1227 20:59:07.492799  517451 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ff29328d1e0d612c7979c16c69d6042f5f31e931d111cc12c8320ed4e4ab5152 \
	I1227 20:59:07.492821  517451 kubeadm.go:319] 	--control-plane 
	I1227 20:59:07.492825  517451 kubeadm.go:319] 
	I1227 20:59:07.492910  517451 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 20:59:07.492914  517451 kubeadm.go:319] 
	I1227 20:59:07.492996  517451 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token sslnvq.tedoe2sw894igal5 \
	I1227 20:59:07.493098  517451 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ff29328d1e0d612c7979c16c69d6042f5f31e931d111cc12c8320ed4e4ab5152 
	I1227 20:59:07.495538  517451 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 20:59:07.495953  517451 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 20:59:07.496066  517451 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
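(Editor's note, not part of the captured log: the kubeadm join commands printed above carry a --discovery-token-ca-cert-hash value. As a rough illustration of where such a value comes from, kubeadm's pin format is the SHA-256 of the cluster CA public key in DER-encoded SubjectPublicKeyInfo form; the sketch below assumes the ca.crt lives under the "/var/lib/minikube/certs" certificateDir logged earlier and is not minikube's actual implementation.)

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Path assumed from the "[certs] Using certificateDir folder" line above.
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // The pin is sha256 over the DER-encoded SubjectPublicKeyInfo of the CA cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }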
	I1227 20:59:07.496086  517451 cni.go:84] Creating CNI manager for ""
	I1227 20:59:07.496094  517451 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 20:59:07.501054  517451 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1227 20:59:06.768105  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	W1227 20:59:09.269300  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	I1227 20:59:07.503871  517451 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 20:59:07.508152  517451 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 20:59:07.508176  517451 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 20:59:07.526195  517451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 20:59:08.293162  517451 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 20:59:08.293307  517451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:59:08.293384  517451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-037975 minikube.k8s.io/updated_at=2025_12_27T20_59_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562 minikube.k8s.io/name=auto-037975 minikube.k8s.io/primary=true
	I1227 20:59:08.444823  517451 ops.go:34] apiserver oom_adj: -16
	I1227 20:59:08.444945  517451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:59:08.945595  517451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:59:09.445743  517451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:59:09.945217  517451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:59:10.445088  517451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:59:10.945112  517451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:59:11.445599  517451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:59:11.945654  517451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:59:12.445485  517451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 20:59:12.596902  517451 kubeadm.go:1114] duration metric: took 4.303656264s to wait for elevateKubeSystemPrivileges
	I1227 20:59:12.596951  517451 kubeadm.go:403] duration metric: took 16.430208593s to StartCluster
	I1227 20:59:12.596969  517451 settings.go:142] acquiring lock: {Name:mk751cea4adf2a56019de1806f27726c329ff1e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:59:12.597042  517451 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:59:12.598152  517451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/kubeconfig: {Name:mk33f3bb22ba9b0007edb185e26ea639320fa2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:59:12.598394  517451 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 20:59:12.598530  517451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 20:59:12.598842  517451 config.go:182] Loaded profile config "auto-037975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:59:12.598891  517451 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:59:12.598964  517451 addons.go:70] Setting storage-provisioner=true in profile "auto-037975"
	I1227 20:59:12.598990  517451 addons.go:239] Setting addon storage-provisioner=true in "auto-037975"
	I1227 20:59:12.599018  517451 host.go:66] Checking if "auto-037975" exists ...
	I1227 20:59:12.599894  517451 cli_runner.go:164] Run: docker container inspect auto-037975 --format={{.State.Status}}
	I1227 20:59:12.600299  517451 addons.go:70] Setting default-storageclass=true in profile "auto-037975"
	I1227 20:59:12.600334  517451 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-037975"
	I1227 20:59:12.600603  517451 cli_runner.go:164] Run: docker container inspect auto-037975 --format={{.State.Status}}
	I1227 20:59:12.602363  517451 out.go:179] * Verifying Kubernetes components...
	I1227 20:59:12.605924  517451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:59:12.643075  517451 addons.go:239] Setting addon default-storageclass=true in "auto-037975"
	I1227 20:59:12.643126  517451 host.go:66] Checking if "auto-037975" exists ...
	I1227 20:59:12.643527  517451 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:59:12.643753  517451 cli_runner.go:164] Run: docker container inspect auto-037975 --format={{.State.Status}}
	I1227 20:59:12.646639  517451 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:59:12.646660  517451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:59:12.646732  517451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-037975
	I1227 20:59:12.696221  517451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/auto-037975/id_rsa Username:docker}
	I1227 20:59:12.700733  517451 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:59:12.700755  517451 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:59:12.700814  517451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-037975
	I1227 20:59:12.729873  517451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/auto-037975/id_rsa Username:docker}
	I1227 20:59:12.980086  517451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 20:59:12.980237  517451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:59:13.007170  517451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:59:13.064792  517451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:59:13.755512  517451 node_ready.go:35] waiting up to 15m0s for node "auto-037975" to be "Ready" ...
	I1227 20:59:13.755933  517451 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
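(Editor's note: the sed pipeline a few lines above rewrites the coredns ConfigMap so that, reconstructed from the sed expression itself rather than read back from the live ConfigMap, the Corefile gains roughly this stanza ahead of the "forward . /etc/resolv.conf" line, plus a "log" directive inserted before "errors":)

        hosts {
           192.168.85.1 host.minikube.internal
           fallthrough
        }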
	I1227 20:59:14.105416  517451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.098211465s)
	I1227 20:59:14.105513  517451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.040651974s)
	I1227 20:59:14.116493  517451 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1227 20:59:11.769789  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	W1227 20:59:13.777664  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	I1227 20:59:14.119543  517451 addons.go:530] duration metric: took 1.520649283s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 20:59:14.260273  517451 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-037975" context rescaled to 1 replicas
	W1227 20:59:15.758975  517451 node_ready.go:57] node "auto-037975" has "Ready":"False" status (will retry)
	W1227 20:59:16.272294  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	W1227 20:59:18.768047  515650 pod_ready.go:104] pod "coredns-7d764666f9-p7xs9" is not "Ready", error: <nil>
	W1227 20:59:18.258192  517451 node_ready.go:57] node "auto-037975" has "Ready":"False" status (will retry)
	W1227 20:59:20.261363  517451 node_ready.go:57] node "auto-037975" has "Ready":"False" status (will retry)
	I1227 20:59:19.767607  515650 pod_ready.go:94] pod "coredns-7d764666f9-p7xs9" is "Ready"
	I1227 20:59:19.767638  515650 pod_ready.go:86] duration metric: took 33.504826713s for pod "coredns-7d764666f9-p7xs9" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:19.770064  515650 pod_ready.go:83] waiting for pod "etcd-no-preload-542467" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:19.774154  515650 pod_ready.go:94] pod "etcd-no-preload-542467" is "Ready"
	I1227 20:59:19.774180  515650 pod_ready.go:86] duration metric: took 4.090261ms for pod "etcd-no-preload-542467" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:19.776171  515650 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-542467" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:19.780428  515650 pod_ready.go:94] pod "kube-apiserver-no-preload-542467" is "Ready"
	I1227 20:59:19.780455  515650 pod_ready.go:86] duration metric: took 4.259708ms for pod "kube-apiserver-no-preload-542467" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:19.782758  515650 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-542467" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:19.965939  515650 pod_ready.go:94] pod "kube-controller-manager-no-preload-542467" is "Ready"
	I1227 20:59:19.965971  515650 pod_ready.go:86] duration metric: took 183.192418ms for pod "kube-controller-manager-no-preload-542467" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:20.166274  515650 pod_ready.go:83] waiting for pod "kube-proxy-7mx96" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:20.566283  515650 pod_ready.go:94] pod "kube-proxy-7mx96" is "Ready"
	I1227 20:59:20.566310  515650 pod_ready.go:86] duration metric: took 400.002561ms for pod "kube-proxy-7mx96" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:20.766019  515650 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-542467" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:21.165532  515650 pod_ready.go:94] pod "kube-scheduler-no-preload-542467" is "Ready"
	I1227 20:59:21.165558  515650 pod_ready.go:86] duration metric: took 399.515592ms for pod "kube-scheduler-no-preload-542467" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:21.165570  515650 pod_ready.go:40] duration metric: took 34.906269536s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:59:21.219181  515650 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 20:59:21.222440  515650 out.go:203] 
	W1227 20:59:21.225360  515650 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 20:59:21.228332  515650 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 20:59:21.231534  515650 out.go:179] * Done! kubectl is now configured to use "no-preload-542467" cluster and "default" namespace by default
	W1227 20:59:22.759229  517451 node_ready.go:57] node "auto-037975" has "Ready":"False" status (will retry)
	W1227 20:59:25.259284  517451 node_ready.go:57] node "auto-037975" has "Ready":"False" status (will retry)
	I1227 20:59:26.278132  517451 node_ready.go:49] node "auto-037975" is "Ready"
	I1227 20:59:26.278159  517451 node_ready.go:38] duration metric: took 12.522564557s for node "auto-037975" to be "Ready" ...
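(Editor's note: the node_ready.go wait above simply polls until the node reports a Ready condition of True. A minimal client-go sketch of that kind of check follows; the kubeconfig path is hypothetical and this is not minikube's actual implementation.)

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is an assumption for illustration only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), "auto-037975", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            // Retry, as the "will retry" log lines above do.
            time.Sleep(2 * time.Second)
        }
    }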
	I1227 20:59:26.278173  517451 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:59:26.278228  517451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:59:26.297803  517451 api_server.go:72] duration metric: took 13.699373969s to wait for apiserver process to appear ...
	I1227 20:59:26.297825  517451 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:59:26.297844  517451 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 20:59:26.311651  517451 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1227 20:59:26.312872  517451 api_server.go:141] control plane version: v1.35.0
	I1227 20:59:26.312896  517451 api_server.go:131] duration metric: took 15.063559ms to wait for apiserver health ...
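(Editor's note: the healthz wait above is an HTTPS GET against the apiserver that expects a 200 response with body "ok". A bare-bones sketch of that probe is below; certificate verification is skipped only to keep the sketch short, and the address is the one logged for this run.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // TLS verification skipped purely for brevity in this sketch.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.85.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
    }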
	I1227 20:59:26.312905  517451 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:59:26.316170  517451 system_pods.go:59] 8 kube-system pods found
	I1227 20:59:26.316207  517451 system_pods.go:61] "coredns-7d764666f9-rkj2k" [f6beeeae-d175-46f3-a73e-8023b7663622] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:59:26.316214  517451 system_pods.go:61] "etcd-auto-037975" [70f7cd8f-e106-4135-a7d9-58b42ee396b8] Running
	I1227 20:59:26.316222  517451 system_pods.go:61] "kindnet-ccqpq" [8f504a6b-6ab1-47e2-9117-ee5b272539df] Running
	I1227 20:59:26.316226  517451 system_pods.go:61] "kube-apiserver-auto-037975" [ccd46d26-4414-4fd0-a2d9-8b8041cf2efb] Running
	I1227 20:59:26.316230  517451 system_pods.go:61] "kube-controller-manager-auto-037975" [1a8e3761-f836-4208-862e-c95875274cb0] Running
	I1227 20:59:26.316235  517451 system_pods.go:61] "kube-proxy-jp6cb" [bb840391-13cb-4762-81be-b759ccff79e8] Running
	I1227 20:59:26.316242  517451 system_pods.go:61] "kube-scheduler-auto-037975" [785266d0-11b5-4efb-90c0-3acb68d15ef7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:59:26.316256  517451 system_pods.go:61] "storage-provisioner" [0926543f-2839-40a5-92c4-fb237a5834ae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:59:26.316273  517451 system_pods.go:74] duration metric: took 3.353493ms to wait for pod list to return data ...
	I1227 20:59:26.316281  517451 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:59:26.320163  517451 default_sa.go:45] found service account: "default"
	I1227 20:59:26.320182  517451 default_sa.go:55] duration metric: took 3.894828ms for default service account to be created ...
	I1227 20:59:26.320191  517451 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:59:26.323132  517451 system_pods.go:86] 8 kube-system pods found
	I1227 20:59:26.323204  517451 system_pods.go:89] "coredns-7d764666f9-rkj2k" [f6beeeae-d175-46f3-a73e-8023b7663622] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:59:26.323228  517451 system_pods.go:89] "etcd-auto-037975" [70f7cd8f-e106-4135-a7d9-58b42ee396b8] Running
	I1227 20:59:26.323266  517451 system_pods.go:89] "kindnet-ccqpq" [8f504a6b-6ab1-47e2-9117-ee5b272539df] Running
	I1227 20:59:26.323292  517451 system_pods.go:89] "kube-apiserver-auto-037975" [ccd46d26-4414-4fd0-a2d9-8b8041cf2efb] Running
	I1227 20:59:26.323315  517451 system_pods.go:89] "kube-controller-manager-auto-037975" [1a8e3761-f836-4208-862e-c95875274cb0] Running
	I1227 20:59:26.323339  517451 system_pods.go:89] "kube-proxy-jp6cb" [bb840391-13cb-4762-81be-b759ccff79e8] Running
	I1227 20:59:26.323374  517451 system_pods.go:89] "kube-scheduler-auto-037975" [785266d0-11b5-4efb-90c0-3acb68d15ef7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:59:26.323402  517451 system_pods.go:89] "storage-provisioner" [0926543f-2839-40a5-92c4-fb237a5834ae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:59:26.323455  517451 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1227 20:59:26.600244  517451 system_pods.go:86] 8 kube-system pods found
	I1227 20:59:26.600332  517451 system_pods.go:89] "coredns-7d764666f9-rkj2k" [f6beeeae-d175-46f3-a73e-8023b7663622] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:59:26.600377  517451 system_pods.go:89] "etcd-auto-037975" [70f7cd8f-e106-4135-a7d9-58b42ee396b8] Running
	I1227 20:59:26.600409  517451 system_pods.go:89] "kindnet-ccqpq" [8f504a6b-6ab1-47e2-9117-ee5b272539df] Running
	I1227 20:59:26.600432  517451 system_pods.go:89] "kube-apiserver-auto-037975" [ccd46d26-4414-4fd0-a2d9-8b8041cf2efb] Running
	I1227 20:59:26.600457  517451 system_pods.go:89] "kube-controller-manager-auto-037975" [1a8e3761-f836-4208-862e-c95875274cb0] Running
	I1227 20:59:26.600492  517451 system_pods.go:89] "kube-proxy-jp6cb" [bb840391-13cb-4762-81be-b759ccff79e8] Running
	I1227 20:59:26.600528  517451 system_pods.go:89] "kube-scheduler-auto-037975" [785266d0-11b5-4efb-90c0-3acb68d15ef7] Running
	I1227 20:59:26.600552  517451 system_pods.go:89] "storage-provisioner" [0926543f-2839-40a5-92c4-fb237a5834ae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:59:26.867675  517451 system_pods.go:86] 8 kube-system pods found
	I1227 20:59:26.867719  517451 system_pods.go:89] "coredns-7d764666f9-rkj2k" [f6beeeae-d175-46f3-a73e-8023b7663622] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:59:26.867726  517451 system_pods.go:89] "etcd-auto-037975" [70f7cd8f-e106-4135-a7d9-58b42ee396b8] Running
	I1227 20:59:26.867734  517451 system_pods.go:89] "kindnet-ccqpq" [8f504a6b-6ab1-47e2-9117-ee5b272539df] Running
	I1227 20:59:26.867738  517451 system_pods.go:89] "kube-apiserver-auto-037975" [ccd46d26-4414-4fd0-a2d9-8b8041cf2efb] Running
	I1227 20:59:26.867743  517451 system_pods.go:89] "kube-controller-manager-auto-037975" [1a8e3761-f836-4208-862e-c95875274cb0] Running
	I1227 20:59:26.867748  517451 system_pods.go:89] "kube-proxy-jp6cb" [bb840391-13cb-4762-81be-b759ccff79e8] Running
	I1227 20:59:26.867752  517451 system_pods.go:89] "kube-scheduler-auto-037975" [785266d0-11b5-4efb-90c0-3acb68d15ef7] Running
	I1227 20:59:26.867758  517451 system_pods.go:89] "storage-provisioner" [0926543f-2839-40a5-92c4-fb237a5834ae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 20:59:27.341359  517451 system_pods.go:86] 8 kube-system pods found
	I1227 20:59:27.341393  517451 system_pods.go:89] "coredns-7d764666f9-rkj2k" [f6beeeae-d175-46f3-a73e-8023b7663622] Running
	I1227 20:59:27.341401  517451 system_pods.go:89] "etcd-auto-037975" [70f7cd8f-e106-4135-a7d9-58b42ee396b8] Running
	I1227 20:59:27.341406  517451 system_pods.go:89] "kindnet-ccqpq" [8f504a6b-6ab1-47e2-9117-ee5b272539df] Running
	I1227 20:59:27.341410  517451 system_pods.go:89] "kube-apiserver-auto-037975" [ccd46d26-4414-4fd0-a2d9-8b8041cf2efb] Running
	I1227 20:59:27.341415  517451 system_pods.go:89] "kube-controller-manager-auto-037975" [1a8e3761-f836-4208-862e-c95875274cb0] Running
	I1227 20:59:27.341421  517451 system_pods.go:89] "kube-proxy-jp6cb" [bb840391-13cb-4762-81be-b759ccff79e8] Running
	I1227 20:59:27.341426  517451 system_pods.go:89] "kube-scheduler-auto-037975" [785266d0-11b5-4efb-90c0-3acb68d15ef7] Running
	I1227 20:59:27.341432  517451 system_pods.go:89] "storage-provisioner" [0926543f-2839-40a5-92c4-fb237a5834ae] Running
	I1227 20:59:27.341439  517451 system_pods.go:126] duration metric: took 1.021242747s to wait for k8s-apps to be running ...
	I1227 20:59:27.341473  517451 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:59:27.341539  517451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:59:27.353846  517451 system_svc.go:56] duration metric: took 12.364548ms WaitForService to wait for kubelet
	I1227 20:59:27.353874  517451 kubeadm.go:587] duration metric: took 14.755448997s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:59:27.353892  517451 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:59:27.356663  517451 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:59:27.356691  517451 node_conditions.go:123] node cpu capacity is 2
	I1227 20:59:27.356710  517451 node_conditions.go:105] duration metric: took 2.813151ms to run NodePressure ...
	I1227 20:59:27.356722  517451 start.go:242] waiting for startup goroutines ...
	I1227 20:59:27.356729  517451 start.go:247] waiting for cluster config update ...
	I1227 20:59:27.356741  517451 start.go:256] writing updated cluster config ...
	I1227 20:59:27.357018  517451 ssh_runner.go:195] Run: rm -f paused
	I1227 20:59:27.360332  517451 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:59:27.365225  517451 pod_ready.go:83] waiting for pod "coredns-7d764666f9-rkj2k" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:27.369804  517451 pod_ready.go:94] pod "coredns-7d764666f9-rkj2k" is "Ready"
	I1227 20:59:27.369830  517451 pod_ready.go:86] duration metric: took 4.582162ms for pod "coredns-7d764666f9-rkj2k" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:27.372005  517451 pod_ready.go:83] waiting for pod "etcd-auto-037975" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:27.376491  517451 pod_ready.go:94] pod "etcd-auto-037975" is "Ready"
	I1227 20:59:27.376554  517451 pod_ready.go:86] duration metric: took 4.524793ms for pod "etcd-auto-037975" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:27.378791  517451 pod_ready.go:83] waiting for pod "kube-apiserver-auto-037975" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:27.383060  517451 pod_ready.go:94] pod "kube-apiserver-auto-037975" is "Ready"
	I1227 20:59:27.383086  517451 pod_ready.go:86] duration metric: took 4.244767ms for pod "kube-apiserver-auto-037975" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:27.385246  517451 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-037975" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:27.766342  517451 pod_ready.go:94] pod "kube-controller-manager-auto-037975" is "Ready"
	I1227 20:59:27.766373  517451 pod_ready.go:86] duration metric: took 381.10168ms for pod "kube-controller-manager-auto-037975" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:27.966664  517451 pod_ready.go:83] waiting for pod "kube-proxy-jp6cb" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:28.366575  517451 pod_ready.go:94] pod "kube-proxy-jp6cb" is "Ready"
	I1227 20:59:28.366601  517451 pod_ready.go:86] duration metric: took 399.910215ms for pod "kube-proxy-jp6cb" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:28.565778  517451 pod_ready.go:83] waiting for pod "kube-scheduler-auto-037975" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:28.965200  517451 pod_ready.go:94] pod "kube-scheduler-auto-037975" is "Ready"
	I1227 20:59:28.965283  517451 pod_ready.go:86] duration metric: took 399.473329ms for pod "kube-scheduler-auto-037975" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 20:59:28.965318  517451 pod_ready.go:40] duration metric: took 1.604940456s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
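(Editor's note: the pod_ready.go waits in both log streams above boil down to listing kube-system pods by one of the listed label selectors and checking each pod's Ready condition. A small client-go sketch in that spirit follows; the kubeconfig path is hypothetical and only one of the selectors is shown.)

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether a pod's Ready condition is True, which is what the
    // pod_ready.go waits in this log are checking for.
    func isReady(pod corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // One of the label selectors listed in the log above.
        pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s ready=%v\n", p.Name, isReady(p))
        }
    }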
	I1227 20:59:29.023449  517451 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 20:59:29.026751  517451 out.go:203] 
	W1227 20:59:29.029647  517451 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 20:59:29.032539  517451 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 20:59:29.035487  517451 out.go:179] * Done! kubectl is now configured to use "auto-037975" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.626849941Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.629932155Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.629966435Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.629989664Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.633029393Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.633062901Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.633088739Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.637321346Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.63735675Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.637397914Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.642380016Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.64241405Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.925229237Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=82308411-462a-4fe7-a229-4696ae473445 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.926571363Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=50617e42-81ae-46ed-b988-a1f49c0d9959 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.929933413Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9/dashboard-metrics-scraper" id=691caa11-e914-4133-bbee-b79edf878164 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.930038379Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.937129772Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.937895044Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.95247691Z" level=info msg="Created container 8042f28b87a726dade2ba4ff6db74c840e2d9ebdd3cd33f55037fcc0e835344e: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9/dashboard-metrics-scraper" id=691caa11-e914-4133-bbee-b79edf878164 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.954958894Z" level=info msg="Starting container: 8042f28b87a726dade2ba4ff6db74c840e2d9ebdd3cd33f55037fcc0e835344e" id=e901e4b9-ea46-45b1-a8ac-e96064923bf3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 20:59:25 no-preload-542467 crio[655]: time="2025-12-27T20:59:25.959866347Z" level=info msg="Started container" PID=1725 containerID=8042f28b87a726dade2ba4ff6db74c840e2d9ebdd3cd33f55037fcc0e835344e description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9/dashboard-metrics-scraper id=e901e4b9-ea46-45b1-a8ac-e96064923bf3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e180ede4a28a83ced917078f73b9c474806938f444febea95033b3297e6545a6
	Dec 27 20:59:25 no-preload-542467 conmon[1723]: conmon 8042f28b87a726dade2b <ninfo>: container 1725 exited with status 1
	Dec 27 20:59:26 no-preload-542467 crio[655]: time="2025-12-27T20:59:26.254919793Z" level=info msg="Removing container: 56b7a73675c4449acdc2e70139dd44f2bdf44ab13102bf6e7bcddf97e4adf5b7" id=2f119be4-b9ab-4c70-84d3-470714e6e635 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:59:26 no-preload-542467 crio[655]: time="2025-12-27T20:59:26.279862413Z" level=info msg="Error loading conmon cgroup of container 56b7a73675c4449acdc2e70139dd44f2bdf44ab13102bf6e7bcddf97e4adf5b7: cgroup deleted" id=2f119be4-b9ab-4c70-84d3-470714e6e635 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 20:59:26 no-preload-542467 crio[655]: time="2025-12-27T20:59:26.28991682Z" level=info msg="Removed container 56b7a73675c4449acdc2e70139dd44f2bdf44ab13102bf6e7bcddf97e4adf5b7: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9/dashboard-metrics-scraper" id=2f119be4-b9ab-4c70-84d3-470714e6e635 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8042f28b87a72       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago      Exited              dashboard-metrics-scraper   3                   e180ede4a28a8       dashboard-metrics-scraper-867fb5f87b-ztnm9   kubernetes-dashboard
	e00bd10efcab4       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           21 seconds ago      Running             storage-provisioner         2                   43df2336db731       storage-provisioner                          kube-system
	0b5f1122d2bae       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago      Running             kubernetes-dashboard        0                   7c7c08324fe10       kubernetes-dashboard-b84665fb8-mhlrk         kubernetes-dashboard
	496bbc1fc440e       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           53 seconds ago      Running             coredns                     1                   6a52d515735db       coredns-7d764666f9-p7xs9                     kube-system
	858e54c5f6e5f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago      Running             busybox                     1                   c01c147c22f9e       busybox                                      default
	484a2b95e52a7       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           53 seconds ago      Running             kindnet-cni                 1                   c2401cd248dde       kindnet-2v4p8                                kube-system
	be5c6226a6047       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           53 seconds ago      Exited              storage-provisioner         1                   43df2336db731       storage-provisioner                          kube-system
	bc0280cd97160       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           53 seconds ago      Running             kube-proxy                  1                   5e6ee1f427ce3       kube-proxy-7mx96                             kube-system
	66e8f829d9c3d       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           59 seconds ago      Running             etcd                        1                   52a060879f70e       etcd-no-preload-542467                       kube-system
	c19a656202dee       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           59 seconds ago      Running             kube-apiserver              1                   27c016ee7491f       kube-apiserver-no-preload-542467             kube-system
	161b43e94648c       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           59 seconds ago      Running             kube-controller-manager     1                   08fca48ad28bc       kube-controller-manager-no-preload-542467    kube-system
	87c12835ffb38       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           59 seconds ago      Running             kube-scheduler              1                   578b2ad328b89       kube-scheduler-no-preload-542467             kube-system
	
	
	==> coredns [496bbc1fc440e887ec74fe08c1f48d36953507a7dc1d003f4353ff7944432c2d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:38146 - 27015 "HINFO IN 2764909140976315041.6274439778967615360. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005504893s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-542467
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-542467
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6e57f623aa755d94d4dbfea5d38ce5cfc38d562
	                    minikube.k8s.io/name=no-preload-542467
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T20_57_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 20:57:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-542467
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 20:59:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 20:59:14 +0000   Sat, 27 Dec 2025 20:57:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 20:59:14 +0000   Sat, 27 Dec 2025 20:57:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 20:59:14 +0000   Sat, 27 Dec 2025 20:57:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 20:59:14 +0000   Sat, 27 Dec 2025 20:57:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-542467
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                965c0b17-6aea-4550-9015-e80b58ef7dfe
	  Boot ID:                    4fbc68cf-5489-40de-be5c-64d41c71a395
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-7d764666f9-p7xs9                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     114s
	  kube-system                 etcd-no-preload-542467                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         119s
	  kube-system                 kindnet-2v4p8                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-no-preload-542467              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-no-preload-542467     200m (10%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-7mx96                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-no-preload-542467              100m (5%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-ztnm9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-mhlrk          0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  116s  node-controller  Node no-preload-542467 event: Registered Node no-preload-542467 in Controller
	  Normal  RegisteredNode  51s   node-controller  Node no-preload-542467 event: Registered Node no-preload-542467 in Controller
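(Editor's note on the Allocated resources table above: the CPU requests sum to 100m + 100m + 100m + 250m + 200m + 100m = 850m, and against the node's 2-CPU (2000m) capacity that is 850/2000 = 42.5%, which kubectl rounds down to the 42% shown; the single 100m kindnet CPU limit is likewise 100/2000 = 5%.)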
	
	
	==> dmesg <==
	[Dec27 20:27] overlayfs: idmapped layers are currently not supported
	[  +6.770645] overlayfs: idmapped layers are currently not supported
	[Dec27 20:28] overlayfs: idmapped layers are currently not supported
	[ +25.872751] overlayfs: idmapped layers are currently not supported
	[Dec27 20:29] overlayfs: idmapped layers are currently not supported
	[ +32.997137] overlayfs: idmapped layers are currently not supported
	[Dec27 20:31] overlayfs: idmapped layers are currently not supported
	[Dec27 20:33] overlayfs: idmapped layers are currently not supported
	[ +33.772475] overlayfs: idmapped layers are currently not supported
	[Dec27 20:39] overlayfs: idmapped layers are currently not supported
	[Dec27 20:40] overlayfs: idmapped layers are currently not supported
	[Dec27 20:44] overlayfs: idmapped layers are currently not supported
	[Dec27 20:45] overlayfs: idmapped layers are currently not supported
	[Dec27 20:49] overlayfs: idmapped layers are currently not supported
	[Dec27 20:50] overlayfs: idmapped layers are currently not supported
	[Dec27 20:51] overlayfs: idmapped layers are currently not supported
	[Dec27 20:52] overlayfs: idmapped layers are currently not supported
	[Dec27 20:53] overlayfs: idmapped layers are currently not supported
	[Dec27 20:55] overlayfs: idmapped layers are currently not supported
	[ +57.272039] overlayfs: idmapped layers are currently not supported
	[Dec27 20:57] overlayfs: idmapped layers are currently not supported
	[ +34.093681] overlayfs: idmapped layers are currently not supported
	[Dec27 20:58] overlayfs: idmapped layers are currently not supported
	[ +25.264982] overlayfs: idmapped layers are currently not supported
	[Dec27 20:59] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [66e8f829d9c3d364d238135636c405f3c6255104b333cb219ec600934ec6abd0] <==
	{"level":"info","ts":"2025-12-27T20:58:39.407284Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T20:58:39.407294Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T20:58:39.408314Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-27T20:58:39.408376Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T20:58:39.408431Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T20:58:39.495429Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T20:58:39.495476Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T20:58:39.495523Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T20:58:39.495538Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:58:39.495553Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T20:58:39.498063Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T20:58:39.498130Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T20:58:39.498149Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T20:58:39.498159Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T20:58:39.509697Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-542467 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T20:58:39.509836Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:58:39.510747Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:58:39.542703Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T20:58:39.546950Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T20:58:39.547003Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T20:58:39.561482Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T20:58:39.562526Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T20:58:39.563392Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2025-12-27T20:58:45.138911Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.723837ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:controller:endpointslice-controller\" limit:1 ","response":"range_response_count:1 size:771"}
	{"level":"info","ts":"2025-12-27T20:58:45.139069Z","caller":"traceutil/trace.go:172","msg":"trace[1795043196] range","detail":"{range_begin:/registry/clusterrolebindings/system:controller:endpointslice-controller; range_end:; response_count:1; response_revision:534; }","duration":"184.948816ms","start":"2025-12-27T20:58:44.954102Z","end":"2025-12-27T20:58:45.139051Z","steps":["trace[1795043196] 'range keys from bolt db'  (duration: 184.046162ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:59:38 up  2:42,  0 user,  load average: 3.40, 2.48, 2.05
	Linux no-preload-542467 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [484a2b95e52a778cc7edd0ec75e04dd23bf1af7cd989cadb7f6465f524fdddc5] <==
	I1227 20:58:45.287731       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 20:58:45.331432       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 20:58:45.338542       1 main.go:148] setting mtu 1500 for CNI 
	I1227 20:58:45.338642       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 20:58:45.338696       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T20:58:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 20:58:45.630429       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 20:58:45.630459       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 20:58:45.630468       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 20:58:45.630564       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 20:59:15.624985       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1227 20:59:15.626348       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 20:59:15.630907       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1227 20:59:15.631017       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1227 20:59:17.231484       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 20:59:17.231523       1 metrics.go:72] Registering metrics
	I1227 20:59:17.231588       1 controller.go:711] "Syncing nftables rules"
	I1227 20:59:25.622087       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:59:25.622128       1 main.go:301] handling current node
	I1227 20:59:35.622137       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 20:59:35.622172       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c19a656202deee3c031169a10d16ce0309d87ad8c5c40f4fe78c299c16484dfb] <==
	I1227 20:58:43.577784       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 20:58:43.578522       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 20:58:43.579278       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 20:58:43.579884       1 aggregator.go:187] initial CRD sync complete...
	I1227 20:58:43.579894       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 20:58:43.579900       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 20:58:43.579906       1 cache.go:39] Caches are synced for autoregister controller
	I1227 20:58:43.590179       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 20:58:43.590299       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 20:58:43.590312       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 20:58:43.590398       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:43.594651       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 20:58:43.620780       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1227 20:58:43.725350       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 20:58:43.790709       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 20:58:43.993963       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 20:58:45.439151       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 20:58:45.683787       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 20:58:45.810998       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 20:58:45.849944       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 20:58:46.036536       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.37.151"}
	I1227 20:58:46.089959       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.180.90"}
	I1227 20:58:47.353002       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 20:58:47.724877       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 20:58:47.794406       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [161b43e94648c1d5a060e54751a6efa923997153605bc1c7b6e51c556ac8e5bf] <==
	I1227 20:58:47.284905       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.284913       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.284919       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.284928       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.284935       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.304882       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.285055       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.306088       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.315759       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-542467"
	I1227 20:58:47.306105       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.306112       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.306122       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.306162       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.306181       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.306074       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.316107       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 20:58:47.306137       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.306175       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.300883       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.316573       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.306130       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.391878       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.406716       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:47.407297       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 20:58:47.407348       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [bc0280cd97160303d86d9f47745f988149d592e81902cb53c092add4f5fb263b] <==
	I1227 20:58:45.523160       1 server_linux.go:53] "Using iptables proxy"
	I1227 20:58:45.736945       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:58:45.837196       1 shared_informer.go:377] "Caches are synced"
	I1227 20:58:45.837227       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 20:58:45.837297       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 20:58:45.902453       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 20:58:45.902568       1 server_linux.go:136] "Using iptables Proxier"
	I1227 20:58:45.909809       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 20:58:45.910191       1 server.go:529] "Version info" version="v1.35.0"
	I1227 20:58:45.910350       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:58:45.912885       1 config.go:200] "Starting service config controller"
	I1227 20:58:45.912957       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 20:58:45.913039       1 config.go:106] "Starting endpoint slice config controller"
	I1227 20:58:45.913074       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 20:58:45.913111       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 20:58:45.913145       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 20:58:45.915012       1 config.go:309] "Starting node config controller"
	I1227 20:58:45.915079       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 20:58:45.915108       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 20:58:46.018156       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 20:58:46.018194       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 20:58:46.018231       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [87c12835ffb381b6fd21e4708c09054907f3433d8bb1508c5f509e1d6dfef79b] <==
	I1227 20:58:40.160866       1 serving.go:386] Generated self-signed cert in-memory
	W1227 20:58:43.066750       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 20:58:43.066846       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 20:58:43.066907       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 20:58:43.066940       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 20:58:43.554836       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 20:58:43.554944       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 20:58:43.570331       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 20:58:43.589532       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 20:58:43.589567       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 20:58:43.589328       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 20:58:43.702036       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 20:59:01 no-preload-542467 kubelet[779]: E1227 20:59:01.181709     779 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-mhlrk" containerName="kubernetes-dashboard"
	Dec 27 20:59:03 no-preload-542467 kubelet[779]: E1227 20:59:03.924329     779 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9" containerName="dashboard-metrics-scraper"
	Dec 27 20:59:03 no-preload-542467 kubelet[779]: I1227 20:59:03.924380     779 scope.go:122] "RemoveContainer" containerID="28ec85aa83fbb60c8eb0a61fc25a3bcca09ba5cf4d98767929c9b737b0932c19"
	Dec 27 20:59:04 no-preload-542467 kubelet[779]: I1227 20:59:04.189126     779 scope.go:122] "RemoveContainer" containerID="28ec85aa83fbb60c8eb0a61fc25a3bcca09ba5cf4d98767929c9b737b0932c19"
	Dec 27 20:59:04 no-preload-542467 kubelet[779]: E1227 20:59:04.189502     779 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9" containerName="dashboard-metrics-scraper"
	Dec 27 20:59:04 no-preload-542467 kubelet[779]: I1227 20:59:04.189541     779 scope.go:122] "RemoveContainer" containerID="56b7a73675c4449acdc2e70139dd44f2bdf44ab13102bf6e7bcddf97e4adf5b7"
	Dec 27 20:59:04 no-preload-542467 kubelet[779]: E1227 20:59:04.189709     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ztnm9_kubernetes-dashboard(42215b40-cb34-4731-b959-ac8858b9baaa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9" podUID="42215b40-cb34-4731-b959-ac8858b9baaa"
	Dec 27 20:59:04 no-preload-542467 kubelet[779]: I1227 20:59:04.226607     779 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-mhlrk" podStartSLOduration=6.380394049 podStartE2EDuration="17.226582995s" podCreationTimestamp="2025-12-27 20:58:47 +0000 UTC" firstStartedPulling="2025-12-27 20:58:48.314530606 +0000 UTC m=+10.576236418" lastFinishedPulling="2025-12-27 20:58:59.160719544 +0000 UTC m=+21.422425364" observedRunningTime="2025-12-27 20:59:00.206423853 +0000 UTC m=+22.468129665" watchObservedRunningTime="2025-12-27 20:59:04.226582995 +0000 UTC m=+26.488288807"
	Dec 27 20:59:08 no-preload-542467 kubelet[779]: E1227 20:59:08.242277     779 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9" containerName="dashboard-metrics-scraper"
	Dec 27 20:59:08 no-preload-542467 kubelet[779]: I1227 20:59:08.242766     779 scope.go:122] "RemoveContainer" containerID="56b7a73675c4449acdc2e70139dd44f2bdf44ab13102bf6e7bcddf97e4adf5b7"
	Dec 27 20:59:08 no-preload-542467 kubelet[779]: E1227 20:59:08.243020     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ztnm9_kubernetes-dashboard(42215b40-cb34-4731-b959-ac8858b9baaa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9" podUID="42215b40-cb34-4731-b959-ac8858b9baaa"
	Dec 27 20:59:16 no-preload-542467 kubelet[779]: I1227 20:59:16.226390     779 scope.go:122] "RemoveContainer" containerID="be5c6226a604721581fbe9641759c616cf40b5597d34625ad5321668ad3f5a6f"
	Dec 27 20:59:19 no-preload-542467 kubelet[779]: E1227 20:59:19.406578     779 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-p7xs9" containerName="coredns"
	Dec 27 20:59:25 no-preload-542467 kubelet[779]: E1227 20:59:25.924648     779 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9" containerName="dashboard-metrics-scraper"
	Dec 27 20:59:25 no-preload-542467 kubelet[779]: I1227 20:59:25.924694     779 scope.go:122] "RemoveContainer" containerID="56b7a73675c4449acdc2e70139dd44f2bdf44ab13102bf6e7bcddf97e4adf5b7"
	Dec 27 20:59:26 no-preload-542467 kubelet[779]: I1227 20:59:26.252792     779 scope.go:122] "RemoveContainer" containerID="56b7a73675c4449acdc2e70139dd44f2bdf44ab13102bf6e7bcddf97e4adf5b7"
	Dec 27 20:59:26 no-preload-542467 kubelet[779]: E1227 20:59:26.253094     779 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9" containerName="dashboard-metrics-scraper"
	Dec 27 20:59:26 no-preload-542467 kubelet[779]: I1227 20:59:26.253121     779 scope.go:122] "RemoveContainer" containerID="8042f28b87a726dade2ba4ff6db74c840e2d9ebdd3cd33f55037fcc0e835344e"
	Dec 27 20:59:26 no-preload-542467 kubelet[779]: E1227 20:59:26.253262     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ztnm9_kubernetes-dashboard(42215b40-cb34-4731-b959-ac8858b9baaa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9" podUID="42215b40-cb34-4731-b959-ac8858b9baaa"
	Dec 27 20:59:28 no-preload-542467 kubelet[779]: E1227 20:59:28.242505     779 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9" containerName="dashboard-metrics-scraper"
	Dec 27 20:59:28 no-preload-542467 kubelet[779]: I1227 20:59:28.242980     779 scope.go:122] "RemoveContainer" containerID="8042f28b87a726dade2ba4ff6db74c840e2d9ebdd3cd33f55037fcc0e835344e"
	Dec 27 20:59:28 no-preload-542467 kubelet[779]: E1227 20:59:28.243208     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ztnm9_kubernetes-dashboard(42215b40-cb34-4731-b959-ac8858b9baaa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ztnm9" podUID="42215b40-cb34-4731-b959-ac8858b9baaa"
	Dec 27 20:59:33 no-preload-542467 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 20:59:33 no-preload-542467 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 20:59:33 no-preload-542467 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [0b5f1122d2bae474f14bb22c11d66a7c0b17063ca9cd0fabf5651aa48608c872] <==
	2025/12/27 20:58:59 Using namespace: kubernetes-dashboard
	2025/12/27 20:58:59 Using in-cluster config to connect to apiserver
	2025/12/27 20:58:59 Using secret token for csrf signing
	2025/12/27 20:58:59 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 20:58:59 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 20:58:59 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 20:58:59 Generating JWE encryption key
	2025/12/27 20:58:59 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 20:58:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 20:58:59 Initializing JWE encryption key from synchronized object
	2025/12/27 20:58:59 Creating in-cluster Sidecar client
	2025/12/27 20:58:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:58:59 Serving insecurely on HTTP port: 9090
	2025/12/27 20:59:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 20:58:59 Starting overwatch
	
	
	==> storage-provisioner [be5c6226a604721581fbe9641759c616cf40b5597d34625ad5321668ad3f5a6f] <==
	I1227 20:58:45.425145       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 20:59:15.427378       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e00bd10efcab4d9ccd6d7493ae80baa4f6b32616652432abfbc287b063e25f59] <==
	I1227 20:59:16.283196       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 20:59:16.294146       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 20:59:16.294198       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 20:59:16.296884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:59:19.751753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:59:24.013085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:59:27.612226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:59:30.665895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:59:33.688293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:59:33.695713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:59:33.695867       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 20:59:33.696025       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-542467_8ebc9a03-a8df-43b4-a1b1-4d1d9cee23d1!
	I1227 20:59:33.696646       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cc82333b-f666-454a-923f-92228b1762ed", APIVersion:"v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-542467_8ebc9a03-a8df-43b4-a1b1-4d1d9cee23d1 became leader
	W1227 20:59:33.703696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:59:33.707274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 20:59:33.796261       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-542467_8ebc9a03-a8df-43b4-a1b1-4d1d9cee23d1!
	W1227 20:59:35.710925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:59:35.718101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:59:37.721552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 20:59:37.727075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-542467 -n no-preload-542467
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-542467 -n no-preload-542467: exit status 2 (446.894656ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-542467 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.57s)
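The post-mortem checks above can be repeated by hand against the same cluster; the following is only a sketch that reuses the exact commands recorded in the log, and assumes the no-preload-542467 profile and its kubeconfig context still exist on the build host:

	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-542467 -n no-preload-542467
	kubectl --context no-preload-542467 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running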
E1227 21:04:25.800428  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:04:29.602746  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:04:29.608112  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:04:29.618391  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:04:29.638733  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:04:29.679121  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:04:29.759466  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:04:29.919848  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:04:30.240496  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:04:30.881426  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:04:32.162347  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:04:34.722987  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (266/332)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.47
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.35.0/json-events 3.83
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.09
18 TestDownloadOnly/v1.35.0/DeleteAll 0.2
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.7
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
27 TestAddons/Setup 139.05
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 9.77
48 TestAddons/StoppedEnableDisable 12.39
49 TestCertOptions 27.72
50 TestCertExpiration 223.92
58 TestErrorSpam/setup 27.19
59 TestErrorSpam/start 0.76
60 TestErrorSpam/status 1.08
61 TestErrorSpam/pause 6.64
62 TestErrorSpam/unpause 5.93
63 TestErrorSpam/stop 1.49
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 45.1
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 30.38
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.45
75 TestFunctional/serial/CacheCmd/cache/add_local 1.22
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.77
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 31.95
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.39
86 TestFunctional/serial/LogsFileCmd 1.75
87 TestFunctional/serial/InvalidService 4.78
89 TestFunctional/parallel/ConfigCmd 0.53
90 TestFunctional/parallel/DashboardCmd 11.18
91 TestFunctional/parallel/DryRun 0.45
92 TestFunctional/parallel/InternationalLanguage 0.27
93 TestFunctional/parallel/StatusCmd 1.11
97 TestFunctional/parallel/ServiceCmdConnect 7.59
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 18.45
101 TestFunctional/parallel/SSHCmd 0.7
102 TestFunctional/parallel/CpCmd 2.01
104 TestFunctional/parallel/FileSync 0.36
105 TestFunctional/parallel/CertSync 2.09
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.73
113 TestFunctional/parallel/License 0.69
114 TestFunctional/parallel/Version/short 0.08
115 TestFunctional/parallel/Version/components 0.82
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.6
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
120 TestFunctional/parallel/ImageCommands/ImageBuild 4.17
121 TestFunctional/parallel/ImageCommands/Setup 0.72
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.49
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.04
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.54
128 TestFunctional/parallel/ProfileCmd/profile_list 0.49
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.3
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.55
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.56
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.68
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.72
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.48
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.9
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.22
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
141 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
146 TestFunctional/parallel/ServiceCmd/DeployApp 6.22
147 TestFunctional/parallel/MountCmd/any-port 7.24
148 TestFunctional/parallel/ServiceCmd/List 0.5
149 TestFunctional/parallel/ServiceCmd/JSONOutput 0.56
150 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
151 TestFunctional/parallel/ServiceCmd/Format 0.5
152 TestFunctional/parallel/ServiceCmd/URL 0.37
153 TestFunctional/parallel/MountCmd/specific-port 2.58
154 TestFunctional/parallel/MountCmd/VerifyCleanup 2.75
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 133.9
163 TestMultiControlPlane/serial/DeployApp 6.4
164 TestMultiControlPlane/serial/PingHostFromPods 1.38
165 TestMultiControlPlane/serial/AddWorkerNode 29.72
166 TestMultiControlPlane/serial/NodeLabels 0.12
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.03
168 TestMultiControlPlane/serial/CopyFile 19.74
169 TestMultiControlPlane/serial/StopSecondaryNode 12.79
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
171 TestMultiControlPlane/serial/RestartSecondaryNode 21.2
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.26
185 TestJSONOutput/start/Command 44.63
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.88
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 35.28
211 TestKicCustomNetwork/use_default_bridge_network 28.64
212 TestKicExistingNetwork 29.16
213 TestKicCustomSubnet 30.29
214 TestKicStaticIP 29.59
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 61.18
219 TestMountStart/serial/StartWithMountFirst 8.9
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 9.2
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.69
224 TestMountStart/serial/VerifyMountPostDelete 0.25
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 7.93
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 73.75
231 TestMultiNode/serial/DeployApp2Nodes 5.01
232 TestMultiNode/serial/PingHostFrom2Pods 0.92
233 TestMultiNode/serial/AddNode 27.9
234 TestMultiNode/serial/MultiNodeLabels 0.08
235 TestMultiNode/serial/ProfileList 0.7
236 TestMultiNode/serial/CopyFile 10.23
237 TestMultiNode/serial/StopNode 2.4
238 TestMultiNode/serial/StartAfterStop 8.39
239 TestMultiNode/serial/RestartKeepsNodes 80.57
240 TestMultiNode/serial/DeleteNode 5.6
241 TestMultiNode/serial/StopMultiNode 24.12
242 TestMultiNode/serial/RestartMultiNode 49.74
243 TestMultiNode/serial/ValidateNameConflict 29.95
250 TestScheduledStopUnix 102.71
253 TestInsufficientStorage 12.99
254 TestRunningBinaryUpgrade 317.37
256 TestKubernetesUpgrade 354.84
257 TestMissingContainerUpgrade 116.15
259 TestPause/serial/Start 52.3
260 TestPause/serial/SecondStartNoReconfiguration 29.16
262 TestStoppedBinaryUpgrade/Setup 1.85
263 TestStoppedBinaryUpgrade/Upgrade 316.27
264 TestStoppedBinaryUpgrade/MinikubeLogs 1.24
272 TestPreload/Start-NoPreload-PullImage 72.51
273 TestPreload/Restart-With-Preload-Check-User-Image 45.09
276 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
277 TestNoKubernetes/serial/StartWithK8s 29.35
278 TestNoKubernetes/serial/StartWithStopK8s 17.12
279 TestNoKubernetes/serial/Start 7.77
280 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
282 TestNoKubernetes/serial/ProfileList 1
283 TestNoKubernetes/serial/Stop 1.29
284 TestNoKubernetes/serial/StartNoArgs 7.18
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.73
293 TestNetworkPlugins/group/false 4
298 TestStartStop/group/old-k8s-version/serial/FirstStart 61.66
299 TestStartStop/group/old-k8s-version/serial/DeployApp 8.43
301 TestStartStop/group/old-k8s-version/serial/Stop 12
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
303 TestStartStop/group/old-k8s-version/serial/SecondStart 53.35
304 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
305 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
306 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
309 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 46.71
310 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.31
312 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.02
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
314 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 55.58
315 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
316 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
317 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
320 TestStartStop/group/embed-certs/serial/FirstStart 41.7
321 TestStartStop/group/embed-certs/serial/DeployApp 8.3
323 TestStartStop/group/embed-certs/serial/Stop 12
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
325 TestStartStop/group/embed-certs/serial/SecondStart 50.97
326 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
327 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
328 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
331 TestStartStop/group/no-preload/serial/FirstStart 59.58
333 TestStartStop/group/newest-cni/serial/FirstStart 34.4
334 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/no-preload/serial/DeployApp 9.44
337 TestStartStop/group/newest-cni/serial/Stop 1.51
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
339 TestStartStop/group/newest-cni/serial/SecondStart 14.22
341 TestStartStop/group/no-preload/serial/Stop 12.43
342 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
346 TestPreload/PreloadSrc/gcs 5.36
347 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
348 TestStartStop/group/no-preload/serial/SecondStart 52.31
349 TestPreload/PreloadSrc/github 7.07
350 TestPreload/PreloadSrc/gcs-cached 0.64
351 TestNetworkPlugins/group/auto/Start 48
352 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
353 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
354 TestNetworkPlugins/group/auto/KubeletFlags 0.29
355 TestNetworkPlugins/group/auto/NetCatPod 9.28
356 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
358 TestNetworkPlugins/group/auto/DNS 0.21
359 TestNetworkPlugins/group/auto/Localhost 0.18
360 TestNetworkPlugins/group/auto/HairPin 0.21
361 TestNetworkPlugins/group/kindnet/Start 49.36
362 TestNetworkPlugins/group/calico/Start 62.29
363 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
364 TestNetworkPlugins/group/kindnet/KubeletFlags 0.5
365 TestNetworkPlugins/group/kindnet/NetCatPod 11.42
366 TestNetworkPlugins/group/kindnet/DNS 0.22
367 TestNetworkPlugins/group/kindnet/Localhost 0.18
368 TestNetworkPlugins/group/kindnet/HairPin 0.18
369 TestNetworkPlugins/group/calico/ControllerPod 6.01
370 TestNetworkPlugins/group/calico/KubeletFlags 0.43
371 TestNetworkPlugins/group/calico/NetCatPod 13.33
372 TestNetworkPlugins/group/custom-flannel/Start 55.09
373 TestNetworkPlugins/group/calico/DNS 0.2
374 TestNetworkPlugins/group/calico/Localhost 0.17
375 TestNetworkPlugins/group/calico/HairPin 0.16
376 TestNetworkPlugins/group/enable-default-cni/Start 65.46
377 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.4
378 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.33
379 TestNetworkPlugins/group/custom-flannel/DNS 0.17
380 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
381 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
382 TestNetworkPlugins/group/flannel/Start 53.78
383 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
384 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.29
385 TestNetworkPlugins/group/enable-default-cni/DNS 0.31
386 TestNetworkPlugins/group/enable-default-cni/Localhost 0.22
387 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
388 TestNetworkPlugins/group/bridge/Start 67.22
389 TestNetworkPlugins/group/flannel/ControllerPod 6.02
390 TestNetworkPlugins/group/flannel/KubeletFlags 0.35
391 TestNetworkPlugins/group/flannel/NetCatPod 12.31
392 TestNetworkPlugins/group/flannel/DNS 0.21
393 TestNetworkPlugins/group/flannel/Localhost 0.18
394 TestNetworkPlugins/group/flannel/HairPin 0.19
395 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
396 TestNetworkPlugins/group/bridge/NetCatPod 9.27
397 TestNetworkPlugins/group/bridge/DNS 0.16
398 TestNetworkPlugins/group/bridge/Localhost 0.13
399 TestNetworkPlugins/group/bridge/HairPin 0.13
x
+
TestDownloadOnly/v1.28.0/json-events (5.47s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-536076 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-536076 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.465478057s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.47s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1227 19:55:28.001757  274336 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1227 19:55:28.001858  274336 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
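The preload-exists check above reports the cached tarball it found; the same file can be inspected by hand (a sketch, assuming the jenkins cache path shown in the log is still in place):

	ls -lh /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4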

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-536076
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-536076: exit status 85 (86.706197ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-536076 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-536076 │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 19:55:22
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 19:55:22.580686  274342 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:55:22.580807  274342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:55:22.580817  274342 out.go:374] Setting ErrFile to fd 2...
	I1227 19:55:22.580822  274342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:55:22.581179  274342 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	W1227 19:55:22.581632  274342 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22332-272475/.minikube/config/config.json: open /home/jenkins/minikube-integration/22332-272475/.minikube/config/config.json: no such file or directory
	I1227 19:55:22.582110  274342 out.go:368] Setting JSON to true
	I1227 19:55:22.582940  274342 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5875,"bootTime":1766859448,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 19:55:22.583042  274342 start.go:143] virtualization:  
	I1227 19:55:22.589347  274342 out.go:99] [download-only-536076] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1227 19:55:22.589591  274342 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball: no such file or directory
	I1227 19:55:22.589684  274342 notify.go:221] Checking for updates...
	I1227 19:55:22.593040  274342 out.go:171] MINIKUBE_LOCATION=22332
	I1227 19:55:22.596431  274342 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 19:55:22.599883  274342 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 19:55:22.603009  274342 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 19:55:22.606055  274342 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1227 19:55:22.612029  274342 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1227 19:55:22.612308  274342 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 19:55:22.634883  274342 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 19:55:22.634980  274342 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 19:55:22.692483  274342 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-27 19:55:22.683290161 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 19:55:22.692593  274342 docker.go:319] overlay module found
	I1227 19:55:22.695729  274342 out.go:99] Using the docker driver based on user configuration
	I1227 19:55:22.695779  274342 start.go:309] selected driver: docker
	I1227 19:55:22.695788  274342 start.go:928] validating driver "docker" against <nil>
	I1227 19:55:22.695891  274342 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 19:55:22.761412  274342 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-27 19:55:22.752596688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 19:55:22.761650  274342 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 19:55:22.761949  274342 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1227 19:55:22.762133  274342 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 19:55:22.765241  274342 out.go:171] Using Docker driver with root privileges
	I1227 19:55:22.768233  274342 cni.go:84] Creating CNI manager for ""
	I1227 19:55:22.768319  274342 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 19:55:22.768336  274342 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 19:55:22.768428  274342 start.go:353] cluster config:
	{Name:download-only-536076 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-536076 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 19:55:22.771573  274342 out.go:99] Starting "download-only-536076" primary control-plane node in "download-only-536076" cluster
	I1227 19:55:22.771607  274342 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 19:55:22.774617  274342 out.go:99] Pulling base image v0.0.48-1766570851-22316 ...
	I1227 19:55:22.774702  274342 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 19:55:22.774741  274342 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 19:55:22.790473  274342 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1227 19:55:22.790691  274342 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local cache directory
	I1227 19:55:22.790781  274342 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1227 19:55:22.834236  274342 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1227 19:55:22.834270  274342 cache.go:65] Caching tarball of preloaded images
	I1227 19:55:22.834447  274342 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 19:55:22.837920  274342 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1227 19:55:22.837954  274342 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1227 19:55:22.837963  274342 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1227 19:55:22.917428  274342 preload.go:313] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1227 19:55:22.917577  274342 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1227 19:55:26.654980  274342 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1227 19:55:26.655374  274342 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/download-only-536076/config.json ...
	I1227 19:55:26.655408  274342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/download-only-536076/config.json: {Name:mkcba9c3372b00cedbd66c3c7efec1ce3a40f81f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:55:26.655584  274342 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 19:55:26.655763  274342 download.go:114] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22332-272475/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-536076 host does not exist
	  To start a cluster, run: "minikube start -p download-only-536076"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
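Editor's note: the download-only log above fetches the v1.28.0 CRI-O preload tarball and pins it to the md5 checksum returned by the GCS API (e092595ade89dbfc477bd4cd6b9c633b). Below is a minimal Go sketch of that kind of checksum-pinned download; it is illustrative only and not minikube's own download code, and the URL and checksum are simply the values quoted in the log (the file is large, several hundred MB).

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Values taken verbatim from the log above.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4"
	want := "e092595ade89dbfc477bd4cd6b9c633b"

	resp, err := http.Get(url)
	if err != nil {
		fmt.Fprintln(os.Stderr, "download failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	out, err := os.Create("preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4")
	if err != nil {
		fmt.Fprintln(os.Stderr, "create failed:", err)
		os.Exit(1)
	}
	defer out.Close()

	// Hash while streaming to disk, then compare against the pinned checksum.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		fmt.Fprintln(os.Stderr, "copy failed:", err)
		os.Exit(1)
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		fmt.Fprintf(os.Stderr, "checksum mismatch: got %s want %s\n", got, want)
		os.Exit(1)
	}
	fmt.Println("preload checksum OK")
}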

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-536076
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/json-events (3.83s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-540569 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-540569 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.830762587s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (3.83s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I1227 19:55:32.273701  274336 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
I1227 19:55:32.273736  274336 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-540569
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-540569: exit status 85 (93.200441ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-536076 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-536076 │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ delete  │ -p download-only-536076                                                                                                                                                   │ download-only-536076 │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ start   │ -o=json --download-only -p download-only-540569 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-540569 │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 19:55:28
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 19:55:28.483526  274541 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:55:28.483646  274541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:55:28.483658  274541 out.go:374] Setting ErrFile to fd 2...
	I1227 19:55:28.483664  274541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:55:28.483908  274541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 19:55:28.484300  274541 out.go:368] Setting JSON to true
	I1227 19:55:28.485060  274541 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5881,"bootTime":1766859448,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 19:55:28.485126  274541 start.go:143] virtualization:  
	I1227 19:55:28.488422  274541 out.go:99] [download-only-540569] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 19:55:28.488709  274541 notify.go:221] Checking for updates...
	I1227 19:55:28.491551  274541 out.go:171] MINIKUBE_LOCATION=22332
	I1227 19:55:28.494763  274541 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 19:55:28.497691  274541 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 19:55:28.500515  274541 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 19:55:28.504001  274541 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1227 19:55:28.509456  274541 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1227 19:55:28.509717  274541 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 19:55:28.538810  274541 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 19:55:28.538917  274541 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 19:55:28.599990  274541 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-27 19:55:28.590540827 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 19:55:28.600090  274541 docker.go:319] overlay module found
	I1227 19:55:28.603020  274541 out.go:99] Using the docker driver based on user configuration
	I1227 19:55:28.603058  274541 start.go:309] selected driver: docker
	I1227 19:55:28.603074  274541 start.go:928] validating driver "docker" against <nil>
	I1227 19:55:28.603187  274541 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 19:55:28.665871  274541 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-27 19:55:28.656473243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 19:55:28.666042  274541 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 19:55:28.666314  274541 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1227 19:55:28.666473  274541 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 19:55:28.669670  274541 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-540569 host does not exist
	  To start a cluster, run: "minikube start -p download-only-540569"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-540569
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.7s)

                                                
                                                
=== RUN   TestBinaryMirror
I1227 19:55:33.379333  274336 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-845718 --alsologtostderr --binary-mirror http://127.0.0.1:41507 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-845718" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-845718
--- PASS: TestBinaryMirror (0.70s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-686526
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-686526: exit status 85 (84.856378ms)

                                                
                                                
-- stdout --
	* Profile "addons-686526" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-686526"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-686526
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-686526: exit status 85 (98.226386ms)

                                                
                                                
-- stdout --
	* Profile "addons-686526" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-686526"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)

                                                
                                    
x
+
TestAddons/Setup (139.05s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-686526 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-686526 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m19.053563435s)
--- PASS: TestAddons/Setup (139.05s)
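Editor's note: TestAddons/Setup starts a single cluster with fifteen --addons flags in one invocation. The Go sketch below shows one hypothetical way such a command line could be assembled from a slice of addon names; it only prints the command instead of running it, and the profile name and the few addons listed are copied from the invocation above purely as example values.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	addons := []string{"registry", "metrics-server", "csi-hostpath-driver", "ingress", "ingress-dns"}
	args := []string{
		"start", "-p", "addons-686526", "--wait=true", "--memory=4096",
		"--driver=docker", "--container-runtime=crio",
	}
	for _, a := range addons {
		args = append(args, "--addons="+a)
	}
	// Print the assembled invocation rather than executing it.
	cmd := exec.Command("out/minikube-linux-arm64", args...)
	fmt.Println(cmd.String())
}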

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-686526 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-686526 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (9.77s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-686526 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-686526 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [c46cbbe8-0670-486c-bd01-4937292ed561] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [c46cbbe8-0670-486c-bd01-4937292ed561] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003121126s
addons_test.go:696: (dbg) Run:  kubectl --context addons-686526 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-686526 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-686526 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-686526 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.77s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.39s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-686526
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-686526: (12.102489894s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-686526
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-686526
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-686526
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

                                                
                                    
x
+
TestCertOptions (27.72s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-765175 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-765175 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.004226296s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-765175 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-765175 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-765175 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-765175" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-765175
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-765175: (2.030174501s)
--- PASS: TestCertOptions (27.72s)
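Editor's note: TestCertOptions verifies the apiserver certificate by dumping it with openssl inside the node and checking that the extra --apiserver-ips and --apiserver-names values ended up in it. As a rough equivalent, the Go sketch below parses a certificate file and prints its SANs; the path is the one the test inspects, and this is an illustration of the check, not the test's own code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path taken from the openssl command in the test above (it runs inside the node).
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// For this run these should include localhost / www.google.com and 127.0.0.1 / 192.168.15.15.
	fmt.Println("DNS names:", cert.DNSNames)
	fmt.Println("IP SANs:  ", cert.IPAddresses)
}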

                                                
                                    
x
+
TestCertExpiration (223.92s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-629954 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-629954 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.116684347s)
E1227 20:47:13.975260  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:47:54.129958  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-629954 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-629954 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (14.399115069s)
helpers_test.go:176: Cleaning up "cert-expiration-629954" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-629954
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-629954: (2.400605295s)
--- PASS: TestCertExpiration (223.92s)
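Editor's note: the two starts above differ only in --cert-expiration: first 3m so the certificates expire during the test, then 8760h to renew them. Both values use Go duration syntax; the tiny sketch below simply parses them to make the magnitudes explicit (8760h is one 365-day year, and the CertExpiration:26280h0m0s default visible in the cluster config earlier in this report is three such years). Illustrative only.

package main

import (
	"fmt"
	"time"
)

func main() {
	short, _ := time.ParseDuration("3m")   // value used to force expiry
	long, _ := time.ParseDuration("8760h") // value used to renew: 365 days
	fmt.Println(short, "=", short.Minutes(), "minutes")
	fmt.Println(long, "=", long.Hours()/24, "days")
}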

                                                
                                    
x
+
TestErrorSpam/setup (27.19s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-710280 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-710280 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-710280 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-710280 --driver=docker  --container-runtime=crio: (27.189084762s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0."
--- PASS: TestErrorSpam/setup (27.19s)

                                                
                                    
x
+
TestErrorSpam/start (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

                                                
                                    
x
+
TestErrorSpam/status (1.08s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 status
--- PASS: TestErrorSpam/status (1.08s)

                                                
                                    
x
+
TestErrorSpam/pause (6.64s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 pause: exit status 80 (1.925518692s)

                                                
                                                
-- stdout --
	* Pausing node nospam-710280 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:59:46Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 pause: exit status 80 (2.303150046s)

                                                
                                                
-- stdout --
	* Pausing node nospam-710280 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:59:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 pause: exit status 80 (2.408458595s)

                                                
                                                
-- stdout --
	* Pausing node nospam-710280 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:59:51Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.64s)
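Editor's note: every pause attempt above fails the same way; the guest-side `sudo runc list -f json` exits 1 with `open /run/runc: no such file or directory`. The Go sketch below reproduces just that probe by hand over `minikube ssh`; the binary path and profile name are the ones from this run, and the sketch is a hypothetical debugging aid rather than part of the test suite.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run the same command the pause path reports as failing, inside the node.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "nospam-710280",
		"ssh", "--", "sudo", "runc", "list", "-f", "json")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		// On this run the node reported: open /run/runc: no such file or directory
		fmt.Println("runc list failed:", err)
	}
}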

                                                
                                    
x
+
TestErrorSpam/unpause (5.93s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 unpause: exit status 80 (1.823557384s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-710280 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:59:53Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 unpause: exit status 80 (2.128381311s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-710280 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:59:55Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 unpause: exit status 80 (1.974876604s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-710280 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T19:59:57Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.93s)

                                                
                                    
x
+
TestErrorSpam/stop (1.49s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 stop: (1.294893068s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-710280 --log_dir /tmp/nospam-710280 stop
--- PASS: TestErrorSpam/stop (1.49s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22332-272475/.minikube/files/etc/test/nested/copy/274336/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (45.1s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-arm64 start -p functional-425652 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2244: (dbg) Done: out/minikube-linux-arm64 start -p functional-425652 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (45.098397536s)
--- PASS: TestFunctional/serial/StartWithProxy (45.10s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (30.38s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1227 20:00:48.722204  274336 config.go:182] Loaded profile config "functional-425652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-425652 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-425652 --alsologtostderr -v=8: (30.376559157s)
functional_test.go:678: soft start took 30.377070445s for "functional-425652" cluster.
I1227 20:01:19.099045  274336 config.go:182] Loaded profile config "functional-425652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (30.38s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-425652 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-425652 cache add registry.k8s.io/pause:3.1: (1.147672637s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-425652 cache add registry.k8s.io/pause:3.3: (1.188363102s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 cache add registry.k8s.io/pause:latest
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-425652 cache add registry.k8s.io/pause:latest: (1.109058636s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-425652 /tmp/TestFunctionalserialCacheCmdcacheadd_local3856156073/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 cache add minikube-local-cache-test:functional-425652
functional_test.go:1114: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 cache delete minikube-local-cache-test:functional-425652
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-425652
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.77s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-425652 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (283.831765ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.77s)
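Note: the cache_reload sequence can be replayed by hand: remove a cached image from the node's runtime, confirm it is gone (crictl inspecti exits non-zero), then let cache reload push every cached image back. Sketch under the same assumptions as the previous note:
    minikube -p functional-425652 ssh "sudo crictl rmi registry.k8s.io/pause:latest"
    minikube -p functional-425652 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"   # fails: image no longer present
    minikube -p functional-425652 cache reload
    minikube -p functional-425652 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"   # succeeds again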

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 kubectl -- --context functional-425652 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-425652 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (31.95s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-425652 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-425652 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.953138045s)
functional_test.go:776: restart took 31.953230888s for "functional-425652" cluster.
I1227 20:01:58.442966  274336 config.go:182] Loaded profile config "functional-425652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (31.95s)
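Note: the restart above exercises minikube's --extra-config flag, whose general shape is component.key=value and which is applied by re-running start against the existing profile. Minimal sketch, assuming a generic minikube binary:
    minikube start -p functional-425652 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all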

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-425652 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
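Note: the health check simply lists the static control-plane pods and reads their phase and Ready condition. A hand-run approximation (the jsonpath expression is illustrative, not taken from the test):
    kubectl --context functional-425652 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'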

                                                
                                    
TestFunctional/serial/LogsCmd (1.39s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-arm64 -p functional-425652 logs: (1.392816008s)
--- PASS: TestFunctional/serial/LogsCmd (1.39s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.75s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 logs --file /tmp/TestFunctionalserialLogsFileCmd1084290535/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-arm64 -p functional-425652 logs --file /tmp/TestFunctionalserialLogsFileCmd1084290535/001/logs.txt: (1.745984481s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.75s)

                                                
                                    
TestFunctional/serial/InvalidService (4.78s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-425652 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-425652
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-425652: exit status 115 (389.970162ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32066 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-425652 delete -f testdata/invalidsvc.yaml
functional_test.go:2337: (dbg) Done: kubectl --context functional-425652 delete -f testdata/invalidsvc.yaml: (1.125433071s)
--- PASS: TestFunctional/serial/InvalidService (4.78s)
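Note: InvalidService verifies that minikube service refuses to print a usable URL when the Service selects no running pods, exiting 115 with SVC_UNREACHABLE. Any selector that matches nothing reproduces it; the manifest below is the one shipped in the test data:
    kubectl --context functional-425652 apply -f testdata/invalidsvc.yaml
    minikube -p functional-425652 service invalid-svc          # exit 115: no running pod backs the service
    kubectl --context functional-425652 delete -f testdata/invalidsvc.yaml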

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.53s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-425652 config get cpus: exit status 14 (83.932642ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-425652 config get cpus: exit status 14 (98.601664ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.53s)
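Note: the config subtest is a set/get/unset round trip; config get on a key that is not set exits 14, which is what the two non-zero exits above show. By hand:
    minikube -p functional-425652 config unset cpus
    minikube -p functional-425652 config get cpus      # exit 14: key not found in config
    minikube -p functional-425652 config set cpus 2
    minikube -p functional-425652 config get cpus      # prints 2
    minikube -p functional-425652 config unset cpus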

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.18s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-425652 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-425652 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 299519: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.18s)

                                                
                                    
TestFunctional/parallel/DryRun (0.45s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-arm64 start -p functional-425652 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-425652 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (200.832ms)

                                                
                                                
-- stdout --
	* [functional-425652] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:02:34.874068  297907 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:02:34.874252  297907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:02:34.874279  297907 out.go:374] Setting ErrFile to fd 2...
	I1227 20:02:34.874299  297907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:02:34.874586  297907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:02:34.874983  297907 out.go:368] Setting JSON to false
	I1227 20:02:34.875898  297907 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6307,"bootTime":1766859448,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:02:34.875993  297907 start.go:143] virtualization:  
	I1227 20:02:34.879459  297907 out.go:179] * [functional-425652] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:02:34.883212  297907 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:02:34.883317  297907 notify.go:221] Checking for updates...
	I1227 20:02:34.889027  297907 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:02:34.892006  297907 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:02:34.894896  297907 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:02:34.897735  297907 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:02:34.900614  297907 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:02:34.903922  297907 config.go:182] Loaded profile config "functional-425652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:02:34.904566  297907 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:02:34.935584  297907 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:02:34.935703  297907 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:02:35.002383  297907 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 20:02:34.99260825 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:02:35.002503  297907 docker.go:319] overlay module found
	I1227 20:02:35.005536  297907 out.go:179] * Using the docker driver based on existing profile
	I1227 20:02:35.008412  297907 start.go:309] selected driver: docker
	I1227 20:02:35.008432  297907 start.go:928] validating driver "docker" against &{Name:functional-425652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-425652 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:02:35.008542  297907 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:02:35.012013  297907 out.go:203] 
	W1227 20:02:35.014923  297907 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1227 20:02:35.017766  297907 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 start -p functional-425652 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)
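Note: DryRun confirms that argument validation still runs under --dry-run: a 250MB memory request is rejected with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23), while the same dry run without the memory override succeeds. Sketch:
    minikube start -p functional-425652 --dry-run --memory 250MB --driver=docker --container-runtime=crio   # exit 23
    minikube start -p functional-425652 --dry-run --driver=docker --container-runtime=crio                  # exit 0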

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.27s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 start -p functional-425652 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-425652 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (267.654042ms)

                                                
                                                
-- stdout --
	* [functional-425652] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:02:41.076679  298988 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:02:41.076883  298988 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:02:41.076904  298988 out.go:374] Setting ErrFile to fd 2...
	I1227 20:02:41.076927  298988 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:02:41.077389  298988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:02:41.077858  298988 out.go:368] Setting JSON to false
	I1227 20:02:41.078893  298988 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6313,"bootTime":1766859448,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:02:41.078961  298988 start.go:143] virtualization:  
	I1227 20:02:41.082374  298988 out.go:179] * [functional-425652] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1227 20:02:41.086297  298988 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:02:41.086521  298988 notify.go:221] Checking for updates...
	I1227 20:02:41.092145  298988 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:02:41.096282  298988 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:02:41.099172  298988 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:02:41.101986  298988 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:02:41.104896  298988 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:02:41.108829  298988 config.go:182] Loaded profile config "functional-425652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:02:41.109351  298988 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:02:41.163224  298988 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:02:41.163356  298988 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:02:41.272056  298988 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 20:02:41.260973617 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:02:41.272176  298988 docker.go:319] overlay module found
	I1227 20:02:41.275345  298988 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1227 20:02:41.278198  298988 start.go:309] selected driver: docker
	I1227 20:02:41.278222  298988 start.go:928] validating driver "docker" against &{Name:functional-425652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-425652 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:02:41.278337  298988 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:02:41.281992  298988 out.go:203] 
	W1227 20:02:41.284939  298988 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1227 20:02:41.287842  298988 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.27s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.11s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.11s)
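Note: status is checked in three forms here: the default table, a Go template via -f, and JSON via -o json. The template fields used by the test are Host, Kubelet, APIServer and Kubeconfig; the labels to the left of each colon are free text (the test's own format string spells one of them "kublet"). Generic form:
    minikube -p functional-425652 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    minikube -p functional-425652 status -o json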

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.59s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-425652 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-425652 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-lj8p9" [ee4b0504-5218-4f50-944d-e0b5c76875dc] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-lj8p9" [ee4b0504-5218-4f50-944d-e0b5c76875dc] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003416628s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:30843
functional_test.go:1685: http://192.168.49.2:30843: success! body:
Request served by hello-node-connect-5d95464fd4-lj8p9

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30843
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.59s)
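Note: end to end, the connect test is: deploy an echo server, expose it as a NodePort Service, ask minikube for the node URL, and request it. By hand (curl stands in for the test's Go HTTP client):
    kubectl --context functional-425652 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
    kubectl --context functional-425652 expose deployment hello-node-connect --type=NodePort --port=8080
    curl "$(minikube -p functional-425652 service hello-node-connect --url)"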

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (18.45s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [e028568c-3948-4896-bfb3-09f51c3e2f0f] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003557238s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-425652 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-425652 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-425652 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-425652 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [a1c8c04c-53b8-4074-bf42-8d9e5cd969c7] Pending
helpers_test.go:353: "sp-pod" [a1c8c04c-53b8-4074-bf42-8d9e5cd969c7] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003260572s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-425652 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-425652 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-425652 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [0def6e14-0ccd-474d-a6dc-4bd447c318b6] Pending
helpers_test.go:353: "sp-pod" [0def6e14-0ccd-474d-a6dc-4bd447c318b6] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.00350632s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-425652 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (18.45s)
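Note: the PVC test proves data survives pod recreation: bind a claim, write a file from one pod, delete that pod, start a fresh pod against the same claim, and read the file back. Condensed, using the manifests from the test data:
    kubectl --context functional-425652 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-425652 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-425652 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-425652 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-425652 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-425652 exec sp-pod -- ls /tmp/mount    # foo is still there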

                                                
                                    
TestFunctional/parallel/SSHCmd (0.7s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.01s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh -n functional-425652 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 cp functional-425652:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1668357148/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh -n functional-425652 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh -n functional-425652 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.01s)
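Note: minikube cp copies files both into and out of the node; the round trip above is, in shorthand (the host-side destination path is illustrative):
    minikube -p functional-425652 cp testdata/cp-test.txt /home/docker/cp-test.txt                  # host -> node
    minikube -p functional-425652 cp functional-425652:/home/docker/cp-test.txt /tmp/cp-test.txt    # node -> host
    minikube -p functional-425652 ssh "sudo cat /home/docker/cp-test.txt"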

                                                
                                    
TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/274336/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh "sudo cat /etc/test/nested/copy/274336/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

                                                
                                    
TestFunctional/parallel/CertSync (2.09s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/274336.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh "sudo cat /etc/ssl/certs/274336.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/274336.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh "sudo cat /usr/share/ca-certificates/274336.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/2743362.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh "sudo cat /etc/ssl/certs/2743362.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/2743362.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh "sudo cat /usr/share/ca-certificates/2743362.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.09s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-425652 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh "sudo systemctl is-active docker"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-425652 ssh "sudo systemctl is-active docker": exit status 1 (367.432317ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh "sudo systemctl is-active containerd"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-425652 ssh "sudo systemctl is-active containerd": exit status 1 (365.847178ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)
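Note: because this profile runs crio, the docker and containerd units on the node must be inactive; systemctl is-active printing "inactive" and exiting 3 is the expected, passing result. Quick check (the crio line is an extra sanity check, not part of the test):
    minikube -p functional-425652 ssh "sudo systemctl is-active crio"         # should report active on this profile
    minikube -p functional-425652 ssh "sudo systemctl is-active docker"       # inactive, exit 3
    minikube -p functional-425652 ssh "sudo systemctl is-active containerd"   # inactive, exit 3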

                                                
                                    
TestFunctional/parallel/License (0.69s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.69s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (0.82s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-425652 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-425652
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-425652
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-425652 image ls --format short --alsologtostderr:
I1227 20:02:49.421990  300508 out.go:360] Setting OutFile to fd 1 ...
I1227 20:02:49.422188  300508 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:02:49.422202  300508 out.go:374] Setting ErrFile to fd 2...
I1227 20:02:49.422209  300508 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:02:49.422462  300508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
I1227 20:02:49.423076  300508 config.go:182] Loaded profile config "functional-425652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:02:49.423194  300508 config.go:182] Loaded profile config "functional-425652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:02:49.423706  300508 cli_runner.go:164] Run: docker container inspect functional-425652 --format={{.State.Status}}
I1227 20:02:49.447201  300508 ssh_runner.go:195] Run: systemctl --version
I1227 20:02:49.447253  300508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-425652
I1227 20:02:49.473700  300508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/functional-425652/id_rsa Username:docker}
I1227 20:02:49.576157  300508 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.60s)
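Note: image ls renders the same image set in several formats; the short, table and json listings in this and the next two subtests differ only in presentation. For scripting, the json form pairs well with jq (the jq filter is illustrative):
    minikube -p functional-425652 image ls --format short
    minikube -p functional-425652 image ls --format table
    minikube -p functional-425652 image ls --format json | jq -r '.[].repoTags[]?'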

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-425652 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                       IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd                        │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ c96ee3c174987 │ 108MB  │
│ gcr.io/k8s-minikube/busybox                       │ latest                                │ 71a676dd070f4 │ 1.63MB │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                                    │ ba04bb24b9575 │ 29MB   │
│ public.ecr.aws/nginx/nginx                        │ alpine                                │ 962dbbc0e55ec │ 55.1MB │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1                               │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/etcd                              │ 3.6.6-0                               │ 271e49a0ebc56 │ 60.9MB │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc                          │ 1611cd07b61d5 │ 3.77MB │
│ localhost/minikube-local-cache-test               │ functional-425652                     │ b6fade0912b2e │ 3.33kB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0                               │ c3fcf259c473a │ 85MB   │
│ registry.k8s.io/kube-proxy                        │ v1.35.0                               │ de369f46c2ff5 │ 74.1MB │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0                               │ ddc8422d4d35a │ 49.8MB │
│ docker.io/kindest/kindnetd                        │ v20250512-df8de77b                    │ b1a8c6f707935 │ 111MB  │
│ registry.k8s.io/pause                             │ 3.10.1                                │ d7b100cd9a77b │ 520kB  │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-425652                     │ ce2d2cda2d858 │ 4.79MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest                                │ ce2d2cda2d858 │ 4.79MB │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0                               │ 88898f1d1a62a │ 72.2MB │
│ registry.k8s.io/pause                             │ 3.1                                   │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                             │ 3.3                                   │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                             │ latest                                │ 8cb2091f603e7 │ 246kB  │
└───────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-425652 image ls --format table --alsologtostderr:
I1227 20:02:53.058061  300802 out.go:360] Setting OutFile to fd 1 ...
I1227 20:02:53.058230  300802 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:02:53.058236  300802 out.go:374] Setting ErrFile to fd 2...
I1227 20:02:53.058242  300802 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:02:53.058604  300802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
I1227 20:02:53.059616  300802 config.go:182] Loaded profile config "functional-425652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:02:53.059769  300802 config.go:182] Loaded profile config "functional-425652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:02:53.060575  300802 cli_runner.go:164] Run: docker container inspect functional-425652 --format={{.State.Status}}
I1227 20:02:53.083375  300802 ssh_runner.go:195] Run: systemctl --version
I1227 20:02:53.083440  300802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-425652
I1227 20:02:53.105098  300802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/functional-425652/id_rsa Username:docker}
I1227 20:02:53.223091  300802 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-425652 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a
944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-425652","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"4788229"},{"id":"b6fade0912b2e13021c20a99e449287176bbaa21648a5884982ad238be414864","repoDigests":["localhost/minikube-local-cache-test@sha256:a02bbdb9b59bfcd7af9be10c31a491807447e472e516c890c6d5d1a2d551452a"],"repoTags":["localhost/minikube-local-cache-test:functional-425652"],"size":"3330"},{"id":"e08f4d9d2e6ede8185064c13b41f8e
eee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6","registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74491780"},{"id":"88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:061d470c1ad66ac12ef70502f257dfb1771cb45ea840d875ef53781a61e81503","registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"72170321"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e7
9d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13","repoDigests":["docker.io/kindest/k
indnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae","docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"108362109"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67","repoDigests":["public.ecr.aws/nginx/nginx@sha256:7cf0c9cc3c6b7ce30b46fa0fe53d95bee9d7803900edb965d3995ddf9ae12d03","public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55077764"},{"id":"c3fcf259c473a57a5d7da116e29
161904491091743512d27467c907c5516f856","repoDigests":["registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3","registry.k8s.io/kube-apiserver@sha256:bd1ea721ef1552db1884b5e8753c61667620556e5e0bfe6be8b32b6a77d7a16d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"85015535"},{"id":"de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5","repoDigests":["registry.k8s.io/kube-proxy@sha256:817c21201edf58f5fe5be560c11178a250f7ba08a010a4cb73efcb0d98b467a5","registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"74106775"},{"id":"271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":["registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890","registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"si
ze":"60850387"},{"id":"ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f","registry.k8s.io/kube-scheduler@sha256:36fe4e2d4335ff20aa335e673e7490151d57ffa753ef9282b8786930e6014ee3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"49822549"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-425652 image ls --format json --alsologtostderr:
I1227 20:02:52.769331  300758 out.go:360] Setting OutFile to fd 1 ...
I1227 20:02:52.769457  300758 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:02:52.769471  300758 out.go:374] Setting ErrFile to fd 2...
I1227 20:02:52.769477  300758 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:02:52.769790  300758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
I1227 20:02:52.770494  300758 config.go:182] Loaded profile config "functional-425652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:02:52.770621  300758 config.go:182] Loaded profile config "functional-425652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:02:52.771103  300758 cli_runner.go:164] Run: docker container inspect functional-425652 --format={{.State.Status}}
I1227 20:02:52.790897  300758 ssh_runner.go:195] Run: systemctl --version
I1227 20:02:52.790972  300758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-425652
I1227 20:02:52.811208  300758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/functional-425652/id_rsa Username:docker}
I1227 20:02:52.916835  300758 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
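
Note: the JSON listing above is one object per image with id, repoDigests, repoTags, and size fields, so it can be post-processed directly. A minimal sketch of pulling a single image ID out of it with jq (jq on the host is an assumption, and the filter string is only an example):
  out/minikube-linux-arm64 -p functional-425652 image ls --format json \
    | jq -r '.[] | select(.repoTags[]? | contains("kube-apiserver")) | .id'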

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-425652 image ls --format yaml --alsologtostderr:
- id: 271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests:
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
- registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "60850387"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: b6fade0912b2e13021c20a99e449287176bbaa21648a5884982ad238be414864
repoDigests:
- localhost/minikube-local-cache-test@sha256:a02bbdb9b59bfcd7af9be10c31a491807447e472e516c890c6d5d1a2d551452a
repoTags:
- localhost/minikube-local-cache-test:functional-425652
size: "3330"
- id: ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f
- registry.k8s.io/kube-scheduler@sha256:36fe4e2d4335ff20aa335e673e7490151d57ffa753ef9282b8786930e6014ee3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "49822549"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
- docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "108362109"
- id: 88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:061d470c1ad66ac12ef70502f257dfb1771cb45ea840d875ef53781a61e81503
- registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "72170321"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3
- registry.k8s.io/kube-apiserver@sha256:bd1ea721ef1552db1884b5e8753c61667620556e5e0bfe6be8b32b6a77d7a16d
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "85015535"
- id: de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:817c21201edf58f5fe5be560c11178a250f7ba08a010a4cb73efcb0d98b467a5
- registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "74106775"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-425652
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4788229"
- id: 962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:7cf0c9cc3c6b7ce30b46fa0fe53d95bee9d7803900edb965d3995ddf9ae12d03
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55077764"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
- registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74491780"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-425652 image ls --format yaml --alsologtostderr:
I1227 20:02:52.531135  300722 out.go:360] Setting OutFile to fd 1 ...
I1227 20:02:52.531274  300722 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:02:52.531287  300722 out.go:374] Setting ErrFile to fd 2...
I1227 20:02:52.531295  300722 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:02:52.531586  300722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
I1227 20:02:52.532219  300722 config.go:182] Loaded profile config "functional-425652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:02:52.532383  300722 config.go:182] Loaded profile config "functional-425652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:02:52.532936  300722 cli_runner.go:164] Run: docker container inspect functional-425652 --format={{.State.Status}}
I1227 20:02:52.550112  300722 ssh_runner.go:195] Run: systemctl --version
I1227 20:02:52.550171  300722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-425652
I1227 20:02:52.574698  300722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/functional-425652/id_rsa Username:docker}
I1227 20:02:52.675966  300722 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-425652 ssh pgrep buildkitd: exit status 1 (369.843404ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 image build -t localhost/my-image:functional-425652 testdata/build --alsologtostderr
2025/12/27 20:02:52 [DEBUG] GET http://127.0.0.1:34177/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-425652 image build -t localhost/my-image:functional-425652 testdata/build --alsologtostderr: (3.554175106s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-425652 image build -t localhost/my-image:functional-425652 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ef4ca76a2ee
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-425652
--> f5f740a76fd
Successfully tagged localhost/my-image:functional-425652
f5f740a76fd768cfb6639cee6eb0a0c4b1acc10d7831874ded62268b71a689d1
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-425652 image build -t localhost/my-image:functional-425652 testdata/build --alsologtostderr:
I1227 20:02:50.391446  300628 out.go:360] Setting OutFile to fd 1 ...
I1227 20:02:50.392252  300628 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:02:50.392301  300628 out.go:374] Setting ErrFile to fd 2...
I1227 20:02:50.392322  300628 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:02:50.392636  300628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
I1227 20:02:50.393415  300628 config.go:182] Loaded profile config "functional-425652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:02:50.394204  300628 config.go:182] Loaded profile config "functional-425652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 20:02:50.394871  300628 cli_runner.go:164] Run: docker container inspect functional-425652 --format={{.State.Status}}
I1227 20:02:50.433999  300628 ssh_runner.go:195] Run: systemctl --version
I1227 20:02:50.434465  300628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-425652
I1227 20:02:50.461577  300628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/functional-425652/id_rsa Username:docker}
I1227 20:02:50.597884  300628 build_images.go:162] Building image from path: /tmp/build.2428773710.tar
I1227 20:02:50.597951  300628 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1227 20:02:50.609911  300628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2428773710.tar
I1227 20:02:50.632394  300628 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2428773710.tar: stat -c "%s %y" /var/lib/minikube/build/build.2428773710.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2428773710.tar': No such file or directory
I1227 20:02:50.632426  300628 ssh_runner.go:362] scp /tmp/build.2428773710.tar --> /var/lib/minikube/build/build.2428773710.tar (3072 bytes)
I1227 20:02:50.672640  300628 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2428773710
I1227 20:02:50.683221  300628 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2428773710 -xf /var/lib/minikube/build/build.2428773710.tar
I1227 20:02:50.693078  300628 crio.go:315] Building image: /var/lib/minikube/build/build.2428773710
I1227 20:02:50.693200  300628 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-425652 /var/lib/minikube/build/build.2428773710 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1227 20:02:53.852621  300628 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-425652 /var/lib/minikube/build/build.2428773710 --cgroup-manager=cgroupfs: (3.159369052s)
I1227 20:02:53.852691  300628 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2428773710
I1227 20:02:53.860622  300628 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2428773710.tar
I1227 20:02:53.867781  300628 build_images.go:218] Built localhost/my-image:functional-425652 from /tmp/build.2428773710.tar
I1227 20:02:53.867813  300628 build_images.go:134] succeeded building to: functional-425652
I1227 20:02:53.867819  300628 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.17s)
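
Note: the three STEP lines in the stdout above imply a Dockerfile in testdata/build roughly like the following (a reconstruction from the logged build steps, not the verbatim file; content.txt is whatever fixture sits next to it):
  FROM gcr.io/k8s-minikube/busybox
  RUN true
  ADD content.txt /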

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-425652
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.72s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-425652 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-425652 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-425652 --alsologtostderr: (1.200109842s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-425652 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.04s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1335: Took "406.285854ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1349: Took "82.816857ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-425652
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-425652 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.30s)
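
Note: the steps above boil down to tagging a host-side image with the profile name and loading it from the Docker daemon into the cluster runtime. A condensed sketch of the same flow run by hand (image name and profile copied from the log):
  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest \
    ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-425652
  out/minikube-linux-arm64 -p functional-425652 image load --daemon \
    ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-425652
  out/minikube-linux-arm64 -p functional-425652 image ls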

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1386: Took "479.738761ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1399: Took "74.749606ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-425652 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-425652 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-425652 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-425652 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-425652 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 296892: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-425652 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.72s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-425652 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-425652 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [172d5402-e590-46a7-8294-11d32d4ce3d3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [172d5402-e590-46a7-8294-11d32d4ce3d3] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.00346177s
I1227 20:02:23.444758  274336 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.90s)
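
Note: ImageSaveToFile and ImageLoadFromFile together exercise a tar round trip; run by hand it would look roughly like this (the tar path is copied from the log, any writable path works):
  out/minikube-linux-arm64 -p functional-425652 image save \
    ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-425652 \
    /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
  out/minikube-linux-arm64 -p functional-425652 image load \
    /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
  out/minikube-linux-arm64 -p functional-425652 image ls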

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-425652
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-425652 --alsologtostderr
functional_test.go:439: (dbg) Done: out/minikube-linux-arm64 -p functional-425652 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-425652 --alsologtostderr: (1.168891089s)
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-425652
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.22s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-425652 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.173.193 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
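
Note: while "minikube tunnel" runs, the nginx-svc LoadBalancer gets a routable address (10.96.173.193 in this run). A sketch of checking it by hand (curl on the host is an assumption; the tunnel command and jsonpath query are the ones from the log):
  out/minikube-linux-arm64 -p functional-425652 tunnel --alsologtostderr &
  kubectl --context functional-425652 get svc nginx-svc \
    -o jsonpath={.status.loadBalancer.ingress[0].ip}
  curl -s http://10.96.173.193/ >/dev/null && echo "tunnel reachable"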

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-425652 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-425652 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-425652 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-8wdwg" [89f28e1e-61a1-44ff-8aeb-6261af007f1b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-8wdwg" [89f28e1e-61a1-44ff-8aeb-6261af007f1b] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.003944439s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.24s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-425652 /tmp/TestFunctionalparallelMountCmdany-port1919039674/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766865755280280083" to /tmp/TestFunctionalparallelMountCmdany-port1919039674/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766865755280280083" to /tmp/TestFunctionalparallelMountCmdany-port1919039674/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766865755280280083" to /tmp/TestFunctionalparallelMountCmdany-port1919039674/001/test-1766865755280280083
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-425652 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (336.934255ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1227 20:02:35.617551  274336 retry.go:84] will retry after 500ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 27 20:02 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 27 20:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 27 20:02 test-1766865755280280083
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh cat /mount-9p/test-1766865755280280083
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-425652 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [a6ca3430-389b-4935-affc-6421a4a8e93a] Pending
helpers_test.go:353: "busybox-mount" [a6ca3430-389b-4935-affc-6421a4a8e93a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [a6ca3430-389b-4935-affc-6421a4a8e93a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [a6ca3430-389b-4935-affc-6421a4a8e93a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004098125s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-425652 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-425652 /tmp/TestFunctionalparallelMountCmdany-port1919039674/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.24s)
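
Note: the check above is a 9p host-to-guest mount plus a busybox pod that reads and removes files under /mount-9p. The host-side half can be reproduced with the same commands the test runs (the temporary directory name here is arbitrary):
  out/minikube-linux-arm64 mount -p functional-425652 /tmp/some-host-dir:/mount-9p --alsologtostderr -v=1 &
  out/minikube-linux-arm64 -p functional-425652 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-425652 ssh -- ls -la /mount-9p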

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 service list -o json
functional_test.go:1509: Took "561.368883ms" to run "out/minikube-linux-arm64 -p functional-425652 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:30222
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:30222
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
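
Note: the ServiceCmd group resolves the hello-node NodePort in several output formats; the HTTPS and URL variants above both point at 192.168.49.2:30222. A quick manual probe would be (curl is an assumption; the port is whatever "service ... --url" prints for the run):
  URL=$(out/minikube-linux-arm64 -p functional-425652 service hello-node --url)
  curl -s "$URL" | head -n 5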

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.58s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-425652 /tmp/TestFunctionalparallelMountCmdspecific-port3959474073/001:/mount-9p --alsologtostderr -v=1 --port 45317]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-425652 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (516.902986ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1227 20:02:43.034759  274336 retry.go:84] will retry after 700ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-425652 /tmp/TestFunctionalparallelMountCmdspecific-port3959474073/001:/mount-9p --alsologtostderr -v=1 --port 45317] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-425652 ssh "sudo umount -f /mount-9p": exit status 1 (353.441567ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-arm64 -p functional-425652 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-425652 /tmp/TestFunctionalparallelMountCmdspecific-port3959474073/001:/mount-9p --alsologtostderr -v=1 --port 45317] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.58s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.75s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-425652 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4099494147/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-425652 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4099494147/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-425652 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4099494147/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-425652 ssh "findmnt -T" /mount1: exit status 1 (1.042076035s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1227 20:02:46.155354  274336 retry.go:84] will retry after 500ms: exit status 1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-425652 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-425652 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-425652 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4099494147/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-425652 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4099494147/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-425652 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4099494147/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.75s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
E1227 20:02:54.130229  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:02:54.135485  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:02:54.145656  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-425652
E1227 20:02:54.165797  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-425652
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-425652
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (133.9s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1227 20:02:59.249916  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:03:04.370198  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:03:14.610681  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:03:35.091569  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:04:16.051809  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-422549 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m13.043044409s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (133.90s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.4s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-422549 kubectl -- rollout status deployment/busybox: (3.648720058s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 kubectl -- exec busybox-769dd8b7dd-k7ks6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 kubectl -- exec busybox-769dd8b7dd-qcz4b -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 kubectl -- exec busybox-769dd8b7dd-v6vks -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 kubectl -- exec busybox-769dd8b7dd-k7ks6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 kubectl -- exec busybox-769dd8b7dd-qcz4b -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 kubectl -- exec busybox-769dd8b7dd-v6vks -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 kubectl -- exec busybox-769dd8b7dd-k7ks6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 kubectl -- exec busybox-769dd8b7dd-qcz4b -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 kubectl -- exec busybox-769dd8b7dd-v6vks -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.40s)
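
The DNS checks above can be repeated manually once the busybox deployment has rolled out (a sketch; <busybox-pod> is a placeholder to fill in from the pod listing):

	minikube -p ha-422549 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
	minikube -p ha-422549 kubectl -- rollout status deployment/busybox
	# <busybox-pod>: placeholder, pick any name from 'kubectl get pods'
	minikube -p ha-422549 kubectl -- exec <busybox-pod> -- nslookup kubernetes.io
	minikube -p ha-422549 kubectl -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local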

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 kubectl -- exec busybox-769dd8b7dd-k7ks6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 kubectl -- exec busybox-769dd8b7dd-k7ks6 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 kubectl -- exec busybox-769dd8b7dd-qcz4b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 kubectl -- exec busybox-769dd8b7dd-qcz4b -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 kubectl -- exec busybox-769dd8b7dd-v6vks -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 kubectl -- exec busybox-769dd8b7dd-v6vks -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.38s)
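
The host-reachability check boils down to resolving host.minikube.internal inside a pod and pinging the returned address (a sketch; <busybox-pod> is a placeholder, and 192.168.49.1 is simply the docker-network gateway this run used):

	# <busybox-pod>: placeholder pod name
	minikube -p ha-422549 kubectl -- exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	minikube -p ha-422549 kubectl -- exec <busybox-pod> -- sh -c "ping -c 1 192.168.49.1"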

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (29.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 node add --alsologtostderr -v 5
E1227 20:05:37.972783  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-422549 node add --alsologtostderr -v 5: (28.724558334s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (29.72s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-422549 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.03265026s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (19.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-422549 status --output json --alsologtostderr -v 5: (1.011262599s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 cp testdata/cp-test.txt ha-422549:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 cp ha-422549:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3848759327/001/cp-test_ha-422549.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 cp ha-422549:/home/docker/cp-test.txt ha-422549-m02:/home/docker/cp-test_ha-422549_ha-422549-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549-m02 "sudo cat /home/docker/cp-test_ha-422549_ha-422549-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 cp ha-422549:/home/docker/cp-test.txt ha-422549-m03:/home/docker/cp-test_ha-422549_ha-422549-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549-m03 "sudo cat /home/docker/cp-test_ha-422549_ha-422549-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 cp ha-422549:/home/docker/cp-test.txt ha-422549-m04:/home/docker/cp-test_ha-422549_ha-422549-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549-m04 "sudo cat /home/docker/cp-test_ha-422549_ha-422549-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 cp testdata/cp-test.txt ha-422549-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 cp ha-422549-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3848759327/001/cp-test_ha-422549-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 cp ha-422549-m02:/home/docker/cp-test.txt ha-422549:/home/docker/cp-test_ha-422549-m02_ha-422549.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549 "sudo cat /home/docker/cp-test_ha-422549-m02_ha-422549.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 cp ha-422549-m02:/home/docker/cp-test.txt ha-422549-m03:/home/docker/cp-test_ha-422549-m02_ha-422549-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549-m03 "sudo cat /home/docker/cp-test_ha-422549-m02_ha-422549-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 cp ha-422549-m02:/home/docker/cp-test.txt ha-422549-m04:/home/docker/cp-test_ha-422549-m02_ha-422549-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549-m04 "sudo cat /home/docker/cp-test_ha-422549-m02_ha-422549-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 cp testdata/cp-test.txt ha-422549-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 cp ha-422549-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3848759327/001/cp-test_ha-422549-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 cp ha-422549-m03:/home/docker/cp-test.txt ha-422549:/home/docker/cp-test_ha-422549-m03_ha-422549.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549 "sudo cat /home/docker/cp-test_ha-422549-m03_ha-422549.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 cp ha-422549-m03:/home/docker/cp-test.txt ha-422549-m02:/home/docker/cp-test_ha-422549-m03_ha-422549-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549-m02 "sudo cat /home/docker/cp-test_ha-422549-m03_ha-422549-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 cp ha-422549-m03:/home/docker/cp-test.txt ha-422549-m04:/home/docker/cp-test_ha-422549-m03_ha-422549-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549-m04 "sudo cat /home/docker/cp-test_ha-422549-m03_ha-422549-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 cp testdata/cp-test.txt ha-422549-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3848759327/001/cp-test_ha-422549-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt ha-422549:/home/docker/cp-test_ha-422549-m04_ha-422549.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549 "sudo cat /home/docker/cp-test_ha-422549-m04_ha-422549.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt ha-422549-m02:/home/docker/cp-test_ha-422549-m04_ha-422549-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549-m02 "sudo cat /home/docker/cp-test_ha-422549-m04_ha-422549-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 cp ha-422549-m04:/home/docker/cp-test.txt ha-422549-m03:/home/docker/cp-test_ha-422549-m04_ha-422549-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 ssh -n ha-422549-m03 "sudo cat /home/docker/cp-test_ha-422549-m04_ha-422549-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.74s)
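
Each hop in the matrix above is the same two commands: a cp into a node followed by an ssh cat to verify (a sketch using one representative pair of nodes from this run):

	minikube -p ha-422549 cp testdata/cp-test.txt ha-422549-m02:/home/docker/cp-test.txt
	minikube -p ha-422549 ssh -n ha-422549-m02 "sudo cat /home/docker/cp-test.txt"
	# copy node-to-node, then read the file on the destination node
	minikube -p ha-422549 cp ha-422549-m02:/home/docker/cp-test.txt ha-422549-m03:/home/docker/cp-test_ha-422549-m02_ha-422549-m03.txt
	minikube -p ha-422549 ssh -n ha-422549-m03 "sudo cat /home/docker/cp-test_ha-422549-m02_ha-422549-m03.txt"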

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-422549 node stop m02 --alsologtostderr -v 5: (12.028001583s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5: exit status 7 (763.980067ms)

                                                
                                                
-- stdout --
	ha-422549
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-422549-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-422549-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-422549-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:06:21.206512  315661 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:06:21.206693  315661 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:06:21.206705  315661 out.go:374] Setting ErrFile to fd 2...
	I1227 20:06:21.206711  315661 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:06:21.206957  315661 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:06:21.207156  315661 out.go:368] Setting JSON to false
	I1227 20:06:21.207201  315661 mustload.go:66] Loading cluster: ha-422549
	I1227 20:06:21.207257  315661 notify.go:221] Checking for updates...
	I1227 20:06:21.207606  315661 config.go:182] Loaded profile config "ha-422549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:06:21.207624  315661 status.go:174] checking status of ha-422549 ...
	I1227 20:06:21.208132  315661 cli_runner.go:164] Run: docker container inspect ha-422549 --format={{.State.Status}}
	I1227 20:06:21.227883  315661 status.go:371] ha-422549 host status = "Running" (err=<nil>)
	I1227 20:06:21.227908  315661 host.go:66] Checking if "ha-422549" exists ...
	I1227 20:06:21.228205  315661 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549
	I1227 20:06:21.267108  315661 host.go:66] Checking if "ha-422549" exists ...
	I1227 20:06:21.267445  315661 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:06:21.267509  315661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549
	I1227 20:06:21.287994  315661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549/id_rsa Username:docker}
	I1227 20:06:21.387398  315661 ssh_runner.go:195] Run: systemctl --version
	I1227 20:06:21.393758  315661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:06:21.406963  315661 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:06:21.465014  315661 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-27 20:06:21.454951488 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:06:21.465582  315661 kubeconfig.go:125] found "ha-422549" server: "https://192.168.49.254:8443"
	I1227 20:06:21.465619  315661 api_server.go:166] Checking apiserver status ...
	I1227 20:06:21.465661  315661 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:06:21.477717  315661 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup
	I1227 20:06:21.486962  315661 api_server.go:192] apiserver freezer: "4:freezer:/docker/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/crio/crio-8c3144c49d56462a5a853edddaca9a64d5dc531268102d3eca8ea6ed215d67be"
	I1227 20:06:21.487036  315661 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/53fd780c3df58a5a4bdaea847d0b5e64fb2d47e60303a4fd37bc512a567bbbcf/crio/crio-8c3144c49d56462a5a853edddaca9a64d5dc531268102d3eca8ea6ed215d67be/freezer.state
	I1227 20:06:21.496430  315661 api_server.go:214] freezer state: "THAWED"
	I1227 20:06:21.496459  315661 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1227 20:06:21.506629  315661 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1227 20:06:21.506660  315661 status.go:463] ha-422549 apiserver status = Running (err=<nil>)
	I1227 20:06:21.506671  315661 status.go:176] ha-422549 status: &{Name:ha-422549 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:06:21.506688  315661 status.go:174] checking status of ha-422549-m02 ...
	I1227 20:06:21.507007  315661 cli_runner.go:164] Run: docker container inspect ha-422549-m02 --format={{.State.Status}}
	I1227 20:06:21.525905  315661 status.go:371] ha-422549-m02 host status = "Stopped" (err=<nil>)
	I1227 20:06:21.525929  315661 status.go:384] host is not running, skipping remaining checks
	I1227 20:06:21.525937  315661 status.go:176] ha-422549-m02 status: &{Name:ha-422549-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:06:21.525957  315661 status.go:174] checking status of ha-422549-m03 ...
	I1227 20:06:21.526278  315661 cli_runner.go:164] Run: docker container inspect ha-422549-m03 --format={{.State.Status}}
	I1227 20:06:21.544338  315661 status.go:371] ha-422549-m03 host status = "Running" (err=<nil>)
	I1227 20:06:21.544362  315661 host.go:66] Checking if "ha-422549-m03" exists ...
	I1227 20:06:21.544689  315661 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m03
	I1227 20:06:21.566923  315661 host.go:66] Checking if "ha-422549-m03" exists ...
	I1227 20:06:21.567252  315661 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:06:21.567309  315661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m03
	I1227 20:06:21.586324  315661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m03/id_rsa Username:docker}
	I1227 20:06:21.691951  315661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:06:21.706589  315661 kubeconfig.go:125] found "ha-422549" server: "https://192.168.49.254:8443"
	I1227 20:06:21.706619  315661 api_server.go:166] Checking apiserver status ...
	I1227 20:06:21.706660  315661 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:06:21.717750  315661 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1203/cgroup
	I1227 20:06:21.726164  315661 api_server.go:192] apiserver freezer: "4:freezer:/docker/7104413e49e7c9310c8ab21471f7a9998232779caa1a26091a2656e33c816c5f/crio/crio-0c761b00e397c74068c073fe2c54d6c920cbfb58d33410d99ca14dd3b23b8691"
	I1227 20:06:21.726248  315661 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7104413e49e7c9310c8ab21471f7a9998232779caa1a26091a2656e33c816c5f/crio/crio-0c761b00e397c74068c073fe2c54d6c920cbfb58d33410d99ca14dd3b23b8691/freezer.state
	I1227 20:06:21.734835  315661 api_server.go:214] freezer state: "THAWED"
	I1227 20:06:21.734912  315661 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1227 20:06:21.743338  315661 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1227 20:06:21.743369  315661 status.go:463] ha-422549-m03 apiserver status = Running (err=<nil>)
	I1227 20:06:21.743389  315661 status.go:176] ha-422549-m03 status: &{Name:ha-422549-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:06:21.743406  315661 status.go:174] checking status of ha-422549-m04 ...
	I1227 20:06:21.743727  315661 cli_runner.go:164] Run: docker container inspect ha-422549-m04 --format={{.State.Status}}
	I1227 20:06:21.763449  315661 status.go:371] ha-422549-m04 host status = "Running" (err=<nil>)
	I1227 20:06:21.763476  315661 host.go:66] Checking if "ha-422549-m04" exists ...
	I1227 20:06:21.763798  315661 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-422549-m04
	I1227 20:06:21.781999  315661 host.go:66] Checking if "ha-422549-m04" exists ...
	I1227 20:06:21.782332  315661 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:06:21.782376  315661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-422549-m04
	I1227 20:06:21.800343  315661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/ha-422549-m04/id_rsa Username:docker}
	I1227 20:06:21.898776  315661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:06:21.911942  315661 status.go:176] ha-422549-m04 status: &{Name:ha-422549-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.79s)
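
The stop-and-check sequence is (a sketch; in this run status exited with code 7 once m02 was stopped, which is why the test treats the non-zero exit as expected):

	minikube -p ha-422549 node stop m02
	# status reports m02 as Stopped and exits non-zero while any node is down
	minikube -p ha-422549 status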

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (21.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-422549 node start m02 --alsologtostderr -v 5: (19.842389289s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-422549 status --alsologtostderr -v 5: (1.219420772s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (21.20s)
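
Bringing the stopped control-plane node back is the inverse command (a sketch; kubectl get nodes runs against the same kubeconfig context the test uses):

	minikube -p ha-422549 node start m02
	minikube -p ha-422549 status
	kubectl get nodes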

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.257368408s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.26s)

                                                
                                    
x
+
TestJSONOutput/start/Command (44.63s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-809975 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1227 20:19:17.181588  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-809975 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (44.623297515s)
--- PASS: TestJSONOutput/start/Command (44.63s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.88s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-809975 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-809975 --output=json --user=testUser: (5.880192651s)
--- PASS: TestJSONOutput/stop/Command (5.88s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-808402 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-808402 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (96.788722ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"bcbaafb3-bd84-4e23-b972-5652292a121b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-808402] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"83daae8b-59df-4dab-b006-80813c5d43a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22332"}}
	{"specversion":"1.0","id":"7036b00b-fa24-46d5-b3f4-9967aee79d7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ab161ddb-354f-4c05-9499-d232c1b3ce47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig"}}
	{"specversion":"1.0","id":"1e842898-f7bb-4a48-9377-2610413f02bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube"}}
	{"specversion":"1.0","id":"839e57c0-c695-4f09-af98-f7cf563d2b40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"aab7b56f-348b-46cb-8147-17e29dd32169","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d0181082-02e5-422a-9eaa-da941a25059b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-808402" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-808402
--- PASS: TestErrorJSONOutput (0.25s)
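
The failure path can be triggered directly: with --output=json, minikube emits CloudEvents-style JSON lines and, for an unsupported driver, ends with an io.k8s.sigs.minikube.error event and exit code 56 (a sketch; the profile name is arbitrary):

	minikube start -p json-output-error-808402 --memory=3072 --output=json --wait=true --driver=fail
	# last event carries "exitcode":"56" and "name":"DRV_UNSUPPORTED_OS", as logged above
	minikube delete -p json-output-error-808402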

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (35.28s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-516603 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-516603 --network=: (33.03980838s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-516603" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-516603
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-516603: (2.2183268s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.28s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (28.64s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-034716 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-034716 --network=bridge: (26.506982303s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-034716" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-034716
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-034716: (2.102221737s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (28.64s)

                                                
                                    
x
+
TestKicExistingNetwork (29.16s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1227 20:20:59.667689  274336 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1227 20:20:59.683395  274336 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1227 20:20:59.684447  274336 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1227 20:20:59.684493  274336 cli_runner.go:164] Run: docker network inspect existing-network
W1227 20:20:59.700012  274336 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1227 20:20:59.700044  274336 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1227 20:20:59.700060  274336 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1227 20:20:59.700161  274336 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 20:20:59.717472  274336 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9521cb9225c5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:1d:ef:38:b7:a6} reservation:<nil>}
I1227 20:20:59.717831  274336 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b71390}
I1227 20:20:59.717860  274336 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1227 20:20:59.717913  274336 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1227 20:20:59.780535  274336 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-557371 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-557371 --network=existing-network: (26.9483201s)
helpers_test.go:176: Cleaning up "existing-network-557371" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-557371
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-557371: (2.065000225s)
I1227 20:21:28.809936  274336 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (29.16s)
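
The test pre-creates the bridge network with the same labels minikube would apply, then points --network at it (a sketch lifted from the commands logged above; 192.168.58.0/24 was simply the first free private subnet this run found):

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
	minikube start -p existing-network-557371 --network=existing-network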

                                                
                                    
x
+
TestKicCustomSubnet (30.29s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-130106 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-130106 --subnet=192.168.60.0/24: (28.142999455s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-130106 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-130106" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-130106
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-130106: (2.120587854s)
--- PASS: TestKicCustomSubnet (30.29s)
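
The subnet assertion is just a start flag plus a docker network inspect (a sketch mirroring the logged commands):

	minikube start -p custom-subnet-130106 --subnet=192.168.60.0/24
	docker network inspect custom-subnet-130106 --format "{{(index .IPAM.Config 0).Subnet}}"   # expect 192.168.60.0/24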

                                                
                                    
x
+
TestKicStaticIP (29.59s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-572244 --static-ip=192.168.200.200
E1227 20:22:13.975021  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-572244 --static-ip=192.168.200.200: (27.28190322s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-572244 ip
helpers_test.go:176: Cleaning up "static-ip-572244" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-572244
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-572244: (2.151152784s)
--- PASS: TestKicStaticIP (29.59s)
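
Likewise for the static-IP case (a sketch; the address only needs to sit in a private range the docker driver can claim):

	minikube start -p static-ip-572244 --static-ip=192.168.200.200
	minikube -p static-ip-572244 ip   # expect 192.168.200.200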

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (61.18s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-007520 --driver=docker  --container-runtime=crio
E1227 20:22:54.130298  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-007520 --driver=docker  --container-runtime=crio: (27.983229938s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-010039 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-010039 --driver=docker  --container-runtime=crio: (27.201703831s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-007520
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-010039
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-010039" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-010039
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-010039: (2.076123408s)
helpers_test.go:176: Cleaning up "first-007520" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-007520
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-007520: (2.393155927s)
--- PASS: TestMinikubeProfile (61.18s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (8.9s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-105005 --memory=3072 --mount-string /tmp/TestMountStartserial3775597235/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1227 20:23:37.013598  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-105005 --memory=3072 --mount-string /tmp/TestMountStartserial3775597235/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.900913558s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.90s)
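
The mount flags used here translate to (a sketch; <host-dir> stands in for the per-test temp directory shown above):

	# <host-dir>: placeholder for any host directory to share into the node
	minikube start -p mount-start-1-105005 --memory=3072 \
	  --mount-string <host-dir>:/minikube-host \
	  --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
	  --no-kubernetes --driver=docker --container-runtime=crio
	# the later VerifyMount* steps just list the mounted path over ssh
	minikube -p mount-start-1-105005 ssh -- ls /minikube-host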

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-105005 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (9.2s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-106769 --memory=3072 --mount-string /tmp/TestMountStartserial3775597235/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-106769 --memory=3072 --mount-string /tmp/TestMountStartserial3775597235/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.199196893s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.20s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-106769 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-105005 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-105005 --alsologtostderr -v=5: (1.688639207s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-106769 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-106769
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-106769: (1.287049213s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.93s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-106769
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-106769: (6.934737602s)
--- PASS: TestMountStart/serial/RestartStopped (7.93s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-106769 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (73.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-458368 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-458368 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m13.234026156s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (73.75s)
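
The two-node start reduces to a single command plus a status check (a sketch, again assuming a minikube binary on PATH):

	minikube start -p multinode-458368 --wait=true --memory=3072 --nodes=2 --driver=docker --container-runtime=crio
	minikube -p multinode-458368 status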

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-458368 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-458368 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-458368 -- rollout status deployment/busybox: (3.342532961s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-458368 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-458368 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-458368 -- exec busybox-769dd8b7dd-dz2v6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-458368 -- exec busybox-769dd8b7dd-gff4z -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-458368 -- exec busybox-769dd8b7dd-dz2v6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-458368 -- exec busybox-769dd8b7dd-gff4z -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-458368 -- exec busybox-769dd8b7dd-dz2v6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-458368 -- exec busybox-769dd8b7dd-gff4z -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.01s)
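
For reference, the two-node DNS check above boils down to the following sequence, condensed from the logged commands (multinode-458368 is the profile from this run; the busybox pod names are whatever the deployment happens to generate):

    out/minikube-linux-arm64 kubectl -p multinode-458368 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    out/minikube-linux-arm64 kubectl -p multinode-458368 -- rollout status deployment/busybox
    out/minikube-linux-arm64 kubectl -p multinode-458368 -- get pods -o jsonpath='{.items[*].metadata.name}'
    # for each pod name returned above, resolve an external and an in-cluster name
    out/minikube-linux-arm64 kubectl -p multinode-458368 -- exec <pod> -- nslookup kubernetes.io
    out/minikube-linux-arm64 kubectl -p multinode-458368 -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local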

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-458368 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-458368 -- exec busybox-769dd8b7dd-dz2v6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-458368 -- exec busybox-769dd8b7dd-dz2v6 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-458368 -- exec busybox-769dd8b7dd-gff4z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-458368 -- exec busybox-769dd8b7dd-gff4z -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)

                                                
                                    
TestMultiNode/serial/AddNode (27.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-458368 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-458368 -v=5 --alsologtostderr: (27.191857259s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.90s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-458368 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.08s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.70s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 cp testdata/cp-test.txt multinode-458368:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 ssh -n multinode-458368 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 cp multinode-458368:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4081699471/001/cp-test_multinode-458368.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 ssh -n multinode-458368 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 cp multinode-458368:/home/docker/cp-test.txt multinode-458368-m02:/home/docker/cp-test_multinode-458368_multinode-458368-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 ssh -n multinode-458368 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 ssh -n multinode-458368-m02 "sudo cat /home/docker/cp-test_multinode-458368_multinode-458368-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 cp multinode-458368:/home/docker/cp-test.txt multinode-458368-m03:/home/docker/cp-test_multinode-458368_multinode-458368-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 ssh -n multinode-458368 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 ssh -n multinode-458368-m03 "sudo cat /home/docker/cp-test_multinode-458368_multinode-458368-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 cp testdata/cp-test.txt multinode-458368-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 ssh -n multinode-458368-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 cp multinode-458368-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4081699471/001/cp-test_multinode-458368-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 ssh -n multinode-458368-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 cp multinode-458368-m02:/home/docker/cp-test.txt multinode-458368:/home/docker/cp-test_multinode-458368-m02_multinode-458368.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 ssh -n multinode-458368-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 ssh -n multinode-458368 "sudo cat /home/docker/cp-test_multinode-458368-m02_multinode-458368.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 cp multinode-458368-m02:/home/docker/cp-test.txt multinode-458368-m03:/home/docker/cp-test_multinode-458368-m02_multinode-458368-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 ssh -n multinode-458368-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 ssh -n multinode-458368-m03 "sudo cat /home/docker/cp-test_multinode-458368-m02_multinode-458368-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 cp testdata/cp-test.txt multinode-458368-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 ssh -n multinode-458368-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 cp multinode-458368-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4081699471/001/cp-test_multinode-458368-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 ssh -n multinode-458368-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 cp multinode-458368-m03:/home/docker/cp-test.txt multinode-458368:/home/docker/cp-test_multinode-458368-m03_multinode-458368.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 ssh -n multinode-458368-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 ssh -n multinode-458368 "sudo cat /home/docker/cp-test_multinode-458368-m03_multinode-458368.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 cp multinode-458368-m03:/home/docker/cp-test.txt multinode-458368-m02:/home/docker/cp-test_multinode-458368-m03_multinode-458368-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 ssh -n multinode-458368-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 ssh -n multinode-458368-m02 "sudo cat /home/docker/cp-test_multinode-458368-m03_multinode-458368-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.23s)
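
The copy matrix above exercises every host-to-node and node-to-node direction; one representative leg, condensed from the logged commands (profile and paths as in this run):

    # host -> node, then verify on the node
    out/minikube-linux-arm64 -p multinode-458368 cp testdata/cp-test.txt multinode-458368:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p multinode-458368 ssh -n multinode-458368 "sudo cat /home/docker/cp-test.txt"
    # node -> node, then verify on the destination node
    out/minikube-linux-arm64 -p multinode-458368 cp multinode-458368:/home/docker/cp-test.txt multinode-458368-m02:/home/docker/cp-test_multinode-458368_multinode-458368-m02.txt
    out/minikube-linux-arm64 -p multinode-458368 ssh -n multinode-458368-m02 "sudo cat /home/docker/cp-test_multinode-458368_multinode-458368-m02.txt"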

                                                
                                    
TestMultiNode/serial/StopNode (2.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-458368 node stop m03: (1.317363397s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-458368 status: exit status 7 (528.6762ms)

                                                
                                                
-- stdout --
	multinode-458368
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-458368-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-458368-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-458368 status --alsologtostderr: exit status 7 (554.727335ms)

                                                
                                                
-- stdout --
	multinode-458368
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-458368-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-458368-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:26:02.571304  391761 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:26:02.571414  391761 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:26:02.571424  391761 out.go:374] Setting ErrFile to fd 2...
	I1227 20:26:02.571429  391761 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:26:02.571673  391761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:26:02.571871  391761 out.go:368] Setting JSON to false
	I1227 20:26:02.571913  391761 mustload.go:66] Loading cluster: multinode-458368
	I1227 20:26:02.571982  391761 notify.go:221] Checking for updates...
	I1227 20:26:02.573618  391761 config.go:182] Loaded profile config "multinode-458368": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:26:02.573643  391761 status.go:174] checking status of multinode-458368 ...
	I1227 20:26:02.576042  391761 cli_runner.go:164] Run: docker container inspect multinode-458368 --format={{.State.Status}}
	I1227 20:26:02.593618  391761 status.go:371] multinode-458368 host status = "Running" (err=<nil>)
	I1227 20:26:02.593646  391761 host.go:66] Checking if "multinode-458368" exists ...
	I1227 20:26:02.593964  391761 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-458368
	I1227 20:26:02.627480  391761 host.go:66] Checking if "multinode-458368" exists ...
	I1227 20:26:02.627807  391761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:26:02.627850  391761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-458368
	I1227 20:26:02.647867  391761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33263 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/multinode-458368/id_rsa Username:docker}
	I1227 20:26:02.750806  391761 ssh_runner.go:195] Run: systemctl --version
	I1227 20:26:02.757251  391761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:26:02.772376  391761 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:26:02.836838  391761 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-27 20:26:02.82622572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:26:02.837422  391761 kubeconfig.go:125] found "multinode-458368" server: "https://192.168.67.2:8443"
	I1227 20:26:02.837594  391761 api_server.go:166] Checking apiserver status ...
	I1227 20:26:02.837655  391761 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:26:02.849141  391761 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1224/cgroup
	I1227 20:26:02.857495  391761 api_server.go:192] apiserver freezer: "4:freezer:/docker/f38e79a2b9840b69420f05a159a20bf43c2516989eedf25929fe73ee3d92dd26/crio/crio-6dd27da270f95716d15b59b323718b6b951d8e2b78eab8c096f9f8358439b6f7"
	I1227 20:26:02.857567  391761 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f38e79a2b9840b69420f05a159a20bf43c2516989eedf25929fe73ee3d92dd26/crio/crio-6dd27da270f95716d15b59b323718b6b951d8e2b78eab8c096f9f8358439b6f7/freezer.state
	I1227 20:26:02.865388  391761 api_server.go:214] freezer state: "THAWED"
	I1227 20:26:02.865423  391761 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1227 20:26:02.873889  391761 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1227 20:26:02.873970  391761 status.go:463] multinode-458368 apiserver status = Running (err=<nil>)
	I1227 20:26:02.873987  391761 status.go:176] multinode-458368 status: &{Name:multinode-458368 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:26:02.874006  391761 status.go:174] checking status of multinode-458368-m02 ...
	I1227 20:26:02.874327  391761 cli_runner.go:164] Run: docker container inspect multinode-458368-m02 --format={{.State.Status}}
	I1227 20:26:02.891619  391761 status.go:371] multinode-458368-m02 host status = "Running" (err=<nil>)
	I1227 20:26:02.891647  391761 host.go:66] Checking if "multinode-458368-m02" exists ...
	I1227 20:26:02.891964  391761 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-458368-m02
	I1227 20:26:02.909237  391761 host.go:66] Checking if "multinode-458368-m02" exists ...
	I1227 20:26:02.909695  391761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:26:02.909746  391761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-458368-m02
	I1227 20:26:02.930300  391761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33268 SSHKeyPath:/home/jenkins/minikube-integration/22332-272475/.minikube/machines/multinode-458368-m02/id_rsa Username:docker}
	I1227 20:26:03.031433  391761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:26:03.044901  391761 status.go:176] multinode-458368-m02 status: &{Name:multinode-458368-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:26:03.044942  391761 status.go:174] checking status of multinode-458368-m03 ...
	I1227 20:26:03.045302  391761 cli_runner.go:164] Run: docker container inspect multinode-458368-m03 --format={{.State.Status}}
	I1227 20:26:03.063339  391761 status.go:371] multinode-458368-m03 host status = "Stopped" (err=<nil>)
	I1227 20:26:03.063369  391761 status.go:384] host is not running, skipping remaining checks
	I1227 20:26:03.063391  391761 status.go:176] multinode-458368-m03 status: &{Name:multinode-458368-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)
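
In short, stopping a single worker leaves the rest of the cluster running, but status reports the degraded state with a non-zero exit code (condensed from the logged commands):

    out/minikube-linux-arm64 -p multinode-458368 node stop m03
    # exits 7 while any node is stopped; m03 shows host/kubelet Stopped
    out/minikube-linux-arm64 -p multinode-458368 status --alsologtostderr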

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-458368 node start m03 -v=5 --alsologtostderr: (7.629439842s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.39s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (80.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-458368
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-458368
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-458368: (25.050292636s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-458368 --wait=true -v=5 --alsologtostderr
E1227 20:27:13.966613  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-458368 --wait=true -v=5 --alsologtostderr: (55.399263442s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-458368
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.57s)
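
Condensed from the logged commands, the restart check is: record the node list, stop the whole profile, start it again with --wait=true, and confirm the node list is unchanged:

    out/minikube-linux-arm64 node list -p multinode-458368
    out/minikube-linux-arm64 stop -p multinode-458368
    out/minikube-linux-arm64 start -p multinode-458368 --wait=true -v=5 --alsologtostderr
    # should print the same nodes as before the stop
    out/minikube-linux-arm64 node list -p multinode-458368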

                                                
                                    
TestMultiNode/serial/DeleteNode (5.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-458368 node delete m03: (4.914309862s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.60s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 stop
E1227 20:27:54.129572  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-458368 stop: (23.895028816s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-458368 status: exit status 7 (110.169626ms)

                                                
                                                
-- stdout --
	multinode-458368
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-458368-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-458368 status --alsologtostderr: exit status 7 (117.203564ms)

                                                
                                                
-- stdout --
	multinode-458368
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-458368-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:28:01.684100  399620 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:28:01.684359  399620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:28:01.684391  399620 out.go:374] Setting ErrFile to fd 2...
	I1227 20:28:01.684510  399620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:28:01.684899  399620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:28:01.685243  399620 out.go:368] Setting JSON to false
	I1227 20:28:01.685311  399620 mustload.go:66] Loading cluster: multinode-458368
	I1227 20:28:01.685422  399620 notify.go:221] Checking for updates...
	I1227 20:28:01.685846  399620 config.go:182] Loaded profile config "multinode-458368": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:28:01.685889  399620 status.go:174] checking status of multinode-458368 ...
	I1227 20:28:01.686572  399620 cli_runner.go:164] Run: docker container inspect multinode-458368 --format={{.State.Status}}
	I1227 20:28:01.708249  399620 status.go:371] multinode-458368 host status = "Stopped" (err=<nil>)
	I1227 20:28:01.708273  399620 status.go:384] host is not running, skipping remaining checks
	I1227 20:28:01.708280  399620 status.go:176] multinode-458368 status: &{Name:multinode-458368 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:28:01.708316  399620 status.go:174] checking status of multinode-458368-m02 ...
	I1227 20:28:01.708676  399620 cli_runner.go:164] Run: docker container inspect multinode-458368-m02 --format={{.State.Status}}
	I1227 20:28:01.744113  399620 status.go:371] multinode-458368-m02 host status = "Stopped" (err=<nil>)
	I1227 20:28:01.744136  399620 status.go:384] host is not running, skipping remaining checks
	I1227 20:28:01.744144  399620 status.go:176] multinode-458368-m02 status: &{Name:multinode-458368-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.12s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (49.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-458368 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-458368 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (48.997325313s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-458368 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.74s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (29.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-458368
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-458368-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-458368-m02 --driver=docker  --container-runtime=crio: exit status 14 (87.264653ms)

                                                
                                                
-- stdout --
	* [multinode-458368-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-458368-m02' is duplicated with machine name 'multinode-458368-m02' in profile 'multinode-458368'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-458368-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-458368-m03 --driver=docker  --container-runtime=crio: (27.397727341s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-458368
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-458368: exit status 80 (336.776701ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-458368 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-458368-m03 already exists in multinode-458368-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-458368-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-458368-m03: (2.074144292s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (29.95s)
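
The name-conflict checks above, condensed (exit codes as observed in this run):

    # a new profile may not reuse a machine name that already belongs to another profile (exit 14, MK_USAGE)
    out/minikube-linux-arm64 start -p multinode-458368-m02 --driver=docker --container-runtime=crio
    # an unrelated name is accepted ...
    out/minikube-linux-arm64 start -p multinode-458368-m03 --driver=docker --container-runtime=crio
    # ... but adding a node to the original profile then collides with it (exit 80, GUEST_NODE_ADD)
    out/minikube-linux-arm64 node add -p multinode-458368
    out/minikube-linux-arm64 delete -p multinode-458368-m03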

                                                
                                    
TestScheduledStopUnix (102.71s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-363352 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-363352 --memory=3072 --driver=docker  --container-runtime=crio: (26.668053937s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-363352 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1227 20:29:52.372897  408062 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:29:52.373203  408062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:52.373219  408062 out.go:374] Setting ErrFile to fd 2...
	I1227 20:29:52.373225  408062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:52.373719  408062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:29:52.374109  408062 out.go:368] Setting JSON to false
	I1227 20:29:52.374267  408062 mustload.go:66] Loading cluster: scheduled-stop-363352
	I1227 20:29:52.374972  408062 config.go:182] Loaded profile config "scheduled-stop-363352": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:29:52.375114  408062 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/scheduled-stop-363352/config.json ...
	I1227 20:29:52.375356  408062 mustload.go:66] Loading cluster: scheduled-stop-363352
	I1227 20:29:52.375539  408062 config.go:182] Loaded profile config "scheduled-stop-363352": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-363352 -n scheduled-stop-363352
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-363352 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1227 20:29:52.838832  408151 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:29:52.839021  408151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:52.839051  408151 out.go:374] Setting ErrFile to fd 2...
	I1227 20:29:52.839073  408151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:29:52.839349  408151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:29:52.839631  408151 out.go:368] Setting JSON to false
	I1227 20:29:52.840594  408151 daemonize_unix.go:73] killing process 408083 as it is an old scheduled stop
	I1227 20:29:52.844150  408151 mustload.go:66] Loading cluster: scheduled-stop-363352
	I1227 20:29:52.844602  408151 config.go:182] Loaded profile config "scheduled-stop-363352": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:29:52.844714  408151 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/scheduled-stop-363352/config.json ...
	I1227 20:29:52.844913  408151 mustload.go:66] Loading cluster: scheduled-stop-363352
	I1227 20:29:52.845067  408151 config.go:182] Loaded profile config "scheduled-stop-363352": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1227 20:29:52.850290  274336 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/scheduled-stop-363352/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-363352 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-363352 -n scheduled-stop-363352
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-363352
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-363352 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1227 20:30:18.747546  408558 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:30:18.747745  408558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:30:18.747773  408558 out.go:374] Setting ErrFile to fd 2...
	I1227 20:30:18.747793  408558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:30:18.748201  408558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:30:18.748591  408558 out.go:368] Setting JSON to false
	I1227 20:30:18.748735  408558 mustload.go:66] Loading cluster: scheduled-stop-363352
	I1227 20:30:18.749384  408558 config.go:182] Loaded profile config "scheduled-stop-363352": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:30:18.749589  408558 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/scheduled-stop-363352/config.json ...
	I1227 20:30:18.749835  408558 mustload.go:66] Loading cluster: scheduled-stop-363352
	I1227 20:30:18.750023  408558 config.go:182] Loaded profile config "scheduled-stop-363352": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-363352
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-363352: exit status 7 (70.717093ms)

                                                
                                                
-- stdout --
	scheduled-stop-363352
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-363352 -n scheduled-stop-363352
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-363352 -n scheduled-stop-363352: exit status 7 (60.67182ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-363352" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-363352
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-363352: (4.471819814s)
--- PASS: TestScheduledStopUnix (102.71s)
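
Condensed from the logged commands, the scheduled-stop flow is: schedule a stop, reschedule (which kills the older scheduled-stop process), cancel, schedule again with a short window, and then confirm the host actually stopped:

    out/minikube-linux-arm64 stop -p scheduled-stop-363352 --schedule 5m -v=5 --alsologtostderr
    out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-363352 -n scheduled-stop-363352
    out/minikube-linux-arm64 stop -p scheduled-stop-363352 --schedule 15s -v=5 --alsologtostderr
    out/minikube-linux-arm64 stop -p scheduled-stop-363352 --cancel-scheduled
    out/minikube-linux-arm64 stop -p scheduled-stop-363352 --schedule 15s -v=5 --alsologtostderr
    # once the 15s window has elapsed, status exits 7 and reports Stopped
    out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-363352 -n scheduled-stop-363352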

                                                
                                    
TestInsufficientStorage (12.99s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-170209 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-170209 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.453996319s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"667d4f12-2106-41a8-ae3b-30cf417059f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-170209] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e93715da-e8c1-46e4-a8ac-6c03f0e53675","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22332"}}
	{"specversion":"1.0","id":"fc9a7e52-89b6-4200-b9aa-6f1ec7d63cb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e2a7d096-26d0-4747-842f-ea317d206865","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig"}}
	{"specversion":"1.0","id":"f4174bcf-e289-4b0b-8f0c-12b2c2a5f926","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube"}}
	{"specversion":"1.0","id":"45e6c9cd-221a-426e-8f1d-307aac00ab8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"59301549-db67-4b9e-acdd-ae1d5ee17911","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"32eaf8c1-c4bd-409d-993b-a693c59c41c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f7f97340-0bde-4dd7-8262-d12fcb7779a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e6ec3b4f-ab0a-4fc7-b4cf-5271132f3921","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5542aca2-edc0-4775-ac2c-7a0fba04b8d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c6adac33-a9a7-4aba-88c4-d9b194fd499f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-170209\" primary control-plane node in \"insufficient-storage-170209\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"34216ff4-02d1-4745-b0c8-af9f4b73a943","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1766570851-22316 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"0b7ae25d-ad21-4bfa-bd75-080ece44ec9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4872ac58-5668-4f6f-a78e-9021af5ff14d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-170209 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-170209 --output=json --layout=cluster: exit status 7 (294.378123ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-170209","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-170209","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 20:31:19.099662  410275 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-170209" does not appear in /home/jenkins/minikube-integration/22332-272475/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-170209 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-170209 --output=json --layout=cluster: exit status 7 (289.485793ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-170209","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-170209","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 20:31:19.391291  410342 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-170209" does not appear in /home/jenkins/minikube-integration/22332-272475/kubeconfig
	E1227 20:31:19.401162  410342 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/insufficient-storage-170209/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-170209" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-170209
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-170209: (1.947516846s)
--- PASS: TestInsufficientStorage (12.99s)
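
Judging from the MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 lines in the JSON output, the test appears to shrink the storage check rather than actually filling /var; treating those variables as test-only knobs, the flow condenses to:

    # start refuses with exit 26 (RSRC_DOCKER_STORAGE) because /var looks full
    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      out/minikube-linux-arm64 start -p insufficient-storage-170209 --memory=3072 --output=json --wait=true --driver=docker --container-runtime=crio
    # cluster status then reports StatusCode 507 (InsufficientStorage) and exits 7
    out/minikube-linux-arm64 status -p insufficient-storage-170209 --output=json --layout=cluster
    out/minikube-linux-arm64 delete -p insufficient-storage-170209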

                                                
                                    
TestRunningBinaryUpgrade (317.37s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3854485591 start -p running-upgrade-680512 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3854485591 start -p running-upgrade-680512 --memory=3072 --vm-driver=docker  --container-runtime=crio: (43.462265178s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-680512 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-680512 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m30.04933287s)
helpers_test.go:176: Cleaning up "running-upgrade-680512" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-680512
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-680512: (1.985506267s)
--- PASS: TestRunningBinaryUpgrade (317.37s)
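
Condensed, the running-binary upgrade is: create the cluster with the old release, then point the freshly built binary at the same, still-running profile (the old-binary path is the temporary copy used by this run):

    /tmp/minikube-v1.35.0.3854485591 start -p running-upgrade-680512 --memory=3072 --vm-driver=docker --container-runtime=crio
    out/minikube-linux-arm64 start -p running-upgrade-680512 --memory=3072 --alsologtostderr -v=1 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 delete -p running-upgrade-680512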

                                                
                                    
TestKubernetesUpgrade (354.84s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-627202 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-627202 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.701186416s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-627202 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-627202 --alsologtostderr: (1.470949363s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-627202 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-627202 status --format={{.Host}}: exit status 7 (91.840847ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-627202 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-627202 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m45.985730746s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-627202 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-627202 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-627202 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (111.689534ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-627202] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-627202
	    minikube start -p kubernetes-upgrade-627202 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6272022 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-627202 --kubernetes-version=v1.35.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-627202 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-627202 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.526810184s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-627202" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-627202
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-627202: (2.808357263s)
--- PASS: TestKubernetesUpgrade (354.84s)
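
Condensed from the logged commands (flags trimmed to the essentials), the upgrade path is start old, stop, start new, and then verify that a downgrade request is refused:

    out/minikube-linux-arm64 start -p kubernetes-upgrade-627202 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 stop -p kubernetes-upgrade-627202
    out/minikube-linux-arm64 start -p kubernetes-upgrade-627202 --memory=3072 --kubernetes-version=v1.35.0 --driver=docker --container-runtime=crio
    # downgrading the existing cluster is refused (exit 106, K8S_DOWNGRADE_UNSUPPORTED)
    out/minikube-linux-arm64 start -p kubernetes-upgrade-627202 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio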

                                                
                                    
TestMissingContainerUpgrade (116.15s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.1296018470 start -p missing-upgrade-655901 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.1296018470 start -p missing-upgrade-655901 --memory=3072 --driver=docker  --container-runtime=crio: (1m1.083535231s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-655901
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-655901
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-655901 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-655901 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (50.684956017s)
helpers_test.go:176: Cleaning up "missing-upgrade-655901" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-655901
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-655901: (2.404090524s)
--- PASS: TestMissingContainerUpgrade (116.15s)
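
What this test simulates is a node container that disappears between minikube runs: the cluster is created with the previous release, its Docker container is stopped and removed behind minikube's back, and the current binary has to detect that and rebuild it on start. A rough sketch of the same sequence, using the binary paths and profile name from this log rather than the test's own helpers:

package main

import (
	"log"
	"os/exec"
)

// run executes a command and aborts on the first failure.
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
}

func main() {
	const profile = "missing-upgrade-655901" // profile name from the log above

	// 1. Create the cluster with the previous minikube release.
	run("/tmp/minikube-v1.35.0.1296018470", "start", "-p", profile,
		"--memory=3072", "--driver=docker", "--container-runtime=crio")

	// 2. Remove the node container out from under minikube.
	run("docker", "stop", profile)
	run("docker", "rm", profile)

	// 3. The current binary must notice the missing container and recreate it.
	run("out/minikube-linux-arm64", "start", "-p", profile,
		"--memory=3072", "--alsologtostderr", "-v=1",
		"--driver=docker", "--container-runtime=crio")
}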

                                                
                                    
TestPause/serial/Start (52.3s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-063268 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-063268 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (52.304804237s)
--- PASS: TestPause/serial/Start (52.30s)

TestPause/serial/SecondStartNoReconfiguration (29.16s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-063268 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1227 20:32:13.966736  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-063268 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.150440185s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.16s)

TestStoppedBinaryUpgrade/Setup (1.85s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.85s)

TestStoppedBinaryUpgrade/Upgrade (316.27s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3992388211 start -p stopped-upgrade-379864 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3992388211 start -p stopped-upgrade-379864 --memory=3072 --vm-driver=docker  --container-runtime=crio: (44.253139008s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3992388211 -p stopped-upgrade-379864 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3992388211 -p stopped-upgrade-379864 stop: (2.416678957s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-379864 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1227 20:35:57.182636  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:37:13.966978  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:37:54.129601  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-379864 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m29.603696941s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (316.27s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.24s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-379864
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-379864: (1.242938881s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.24s)

TestPreload/Start-NoPreload-PullImage (72.51s)

=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-559240 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-559240 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (1m5.774334892s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-559240 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-559240
preload_test.go:62: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-559240: (5.988296233s)
--- PASS: TestPreload/Start-NoPreload-PullImage (72.51s)

TestPreload/Restart-With-Preload-Check-User-Image (45.09s)

=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-559240 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1227 20:40:17.013876  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:71: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-559240 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (44.842613514s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-559240 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (45.09s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-499448 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-499448 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (86.403694ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-499448] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (29.35s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-499448 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-499448 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (28.97419468s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-499448 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (29.35s)

TestNoKubernetes/serial/StartWithStopK8s (17.12s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-499448 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-499448 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (14.809355494s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-499448 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-499448 status -o json: exit status 2 (307.406068ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-499448","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-499448
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-499448: (2.003082075s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.12s)
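
The status -o json output captured above is easy to consume programmatically. The sketch below decodes it and checks the combination this test is after, a running host with Kubernetes stopped; the struct simply mirrors the fields visible in this log and is not minikube's own type.

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// ProfileStatus mirrors the fields shown by "minikube status -o json" in this log.
type ProfileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// JSON captured above for the NoKubernetes-499448 profile.
	raw := `{"Name":"NoKubernetes-499448","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`

	var st ProfileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		log.Fatalf("decode status: %v", err)
	}

	// Exit status 2 from "minikube status" is expected here: the host is up
	// while kubelet and the API server are intentionally stopped.
	if st.Host == "Running" && st.Kubelet == "Stopped" && st.APIServer == "Stopped" {
		fmt.Printf("%s: Kubernetes disabled as expected\n", st.Name)
	} else {
		log.Fatalf("unexpected status: %+v", st)
	}
}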

                                                
                                    
TestNoKubernetes/serial/Start (7.77s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-499448 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-499448 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.770810381s)
--- PASS: TestNoKubernetes/serial/Start (7.77s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22332-272475/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-499448 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-499448 "sudo systemctl is-active --quiet service kubelet": exit status 1 (265.181699ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
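
The "ssh: Process exited with status 3" line above is the expected outcome, not a failure: systemctl is-active exits non-zero when the queried unit is not active (3 is the conventional code for an inactive unit), which is exactly what this check wants for kubelet. A sketch of the same probe with the exit handling spelled out, using the command and profile from this log:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same probe the test runs: ask systemd inside the node whether kubelet is active.
	cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-499448",
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()

	if err == nil {
		log.Fatal("kubelet is active, but this profile should be running without Kubernetes")
	}
	ee, ok := err.(*exec.ExitError)
	if !ok {
		log.Fatalf("could not run minikube ssh: %v", err)
	}

	// A non-zero exit (status 3 in the log above) means the unit is not active.
	fmt.Printf("kubelet not running (exit %d), as expected\n", ee.ExitCode())
}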

                                                
                                    
TestNoKubernetes/serial/ProfileList (1s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.00s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-499448
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-499448: (1.285963441s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (7.18s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-499448 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-499448 --driver=docker  --container-runtime=crio: (7.181030206s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.18s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.73s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-499448 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-499448 "sudo systemctl is-active --quiet service kubelet": exit status 1 (725.2529ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.73s)

TestNetworkPlugins/group/false (4s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-037975 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-037975 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (205.618185ms)

                                                
                                                
-- stdout --
	* [false-037975] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 20:45:06.198720  468493 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:45:06.198839  468493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:45:06.198851  468493 out.go:374] Setting ErrFile to fd 2...
	I1227 20:45:06.198857  468493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:45:06.199213  468493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-272475/.minikube/bin
	I1227 20:45:06.199695  468493 out.go:368] Setting JSON to false
	I1227 20:45:06.200564  468493 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":8859,"bootTime":1766859448,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 20:45:06.200658  468493 start.go:143] virtualization:  
	I1227 20:45:06.204049  468493 out.go:179] * [false-037975] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:45:06.208634  468493 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:45:06.208821  468493 notify.go:221] Checking for updates...
	I1227 20:45:06.214779  468493 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:45:06.217776  468493 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-272475/kubeconfig
	I1227 20:45:06.220669  468493 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-272475/.minikube
	I1227 20:45:06.223993  468493 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:45:06.226980  468493 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:45:06.230337  468493 config.go:182] Loaded profile config "force-systemd-env-859716": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 20:45:06.230434  468493 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:45:06.271382  468493 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:45:06.271503  468493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:45:06.339039  468493 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:45:06.32810485 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:45:06.339146  468493 docker.go:319] overlay module found
	I1227 20:45:06.342328  468493 out.go:179] * Using the docker driver based on user configuration
	I1227 20:45:06.345246  468493 start.go:309] selected driver: docker
	I1227 20:45:06.345280  468493 start.go:928] validating driver "docker" against <nil>
	I1227 20:45:06.345300  468493 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:45:06.349068  468493 out.go:203] 
	W1227 20:45:06.351922  468493 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1227 20:45:06.354732  468493 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-037975 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-037975

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-037975

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-037975

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-037975

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-037975

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-037975

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-037975

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-037975

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-037975

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-037975

>>> host: /etc/nsswitch.conf:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: /etc/hosts:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: /etc/resolv.conf:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-037975

>>> host: crictl pods:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: crictl containers:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> k8s: describe netcat deployment:
error: context "false-037975" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-037975" does not exist

>>> k8s: netcat logs:
error: context "false-037975" does not exist

>>> k8s: describe coredns deployment:
error: context "false-037975" does not exist

>>> k8s: describe coredns pods:
error: context "false-037975" does not exist

>>> k8s: coredns logs:
error: context "false-037975" does not exist

>>> k8s: describe api server pod(s):
error: context "false-037975" does not exist

>>> k8s: api server logs:
error: context "false-037975" does not exist

>>> host: /etc/cni:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: ip a s:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: ip r s:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: iptables-save:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: iptables table nat:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> k8s: describe kube-proxy daemon set:
error: context "false-037975" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-037975" does not exist

>>> k8s: kube-proxy logs:
error: context "false-037975" does not exist

>>> host: kubelet daemon status:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: kubelet daemon config:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> k8s: kubelet logs:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-037975

>>> host: docker daemon status:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: docker daemon config:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: /etc/docker/daemon.json:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: docker system info:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: cri-docker daemon status:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: cri-docker daemon config:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: cri-dockerd version:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: containerd daemon status:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: containerd daemon config:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: /etc/containerd/config.toml:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: containerd config dump:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: crio daemon status:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: crio daemon config:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: /etc/crio:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

>>> host: crio config:
* Profile "false-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-037975"

----------------------- debugLogs end: false-037975 [took: 3.59498688s] --------------------------------
helpers_test.go:176: Cleaning up "false-037975" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-037975
--- PASS: TestNetworkPlugins/group/false (4.00s)
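
Like StartNoK8sWithVersion earlier in this report, this test exercises minikube's flag validation rather than a real cluster start: --cni=false is rejected because the crio runtime requires a CNI, just as --kubernetes-version is rejected alongside --no-kubernetes. A table-driven sketch of those two usage checks, assuming the exit code 14 / MK_USAGE behaviour shown in the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Both invocations are copied from this report; each should fail fast with
	// exit code 14 and an MK_USAGE message, without ever creating a cluster.
	cases := []struct {
		name string
		args []string
	}{
		{"crio requires CNI", []string{"start", "-p", "false-037975", "--memory=3072",
			"--cni=false", "--driver=docker", "--container-runtime=crio"}},
		{"no-kubernetes with version", []string{"start", "-p", "NoKubernetes-499448",
			"--no-kubernetes", "--kubernetes-version=v1.28.0",
			"--driver=docker", "--container-runtime=crio"}},
	}

	for _, c := range cases {
		out, err := exec.Command("out/minikube-linux-arm64", c.args...).CombinedOutput()
		ee, ok := err.(*exec.ExitError)
		if !ok || ee.ExitCode() != 14 || !strings.Contains(string(out), "MK_USAGE") {
			log.Fatalf("%s: expected exit 14 with MK_USAGE, got %v\n%s", c.name, err, out)
		}
		fmt.Printf("%s: rejected as expected\n", c.name)
	}
}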

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (61.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-855707 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-855707 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m1.657952788s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (61.66s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-855707 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [9c2c639a-7368-4a9e-ad13-67a2e87b202b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [9c2c639a-7368-4a9e-ad13-67a2e87b202b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.006131027s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-855707 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.43s)

TestStartStop/group/old-k8s-version/serial/Stop (12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-855707 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-855707 --alsologtostderr -v=3: (11.998265237s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.00s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-855707 -n old-k8s-version-855707
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-855707 -n old-k8s-version-855707: exit status 7 (64.014189ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-855707 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (53.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-855707 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-855707 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (53.001031424s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-855707 -n old-k8s-version-855707
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (53.35s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-77hv8" [a193555c-c195-4ba4-8eb9-c4c8e4a915df] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.007142237s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-77hv8" [a193555c-c195-4ba4-8eb9-c4c8e4a915df] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003346293s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-855707 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-855707 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (46.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-058924 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E1227 20:52:37.183645  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:52:54.130212  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-058924 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (46.713751003s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (46.71s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-058924 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [064bee55-c240-4433-bde1-87acf5ac8840] Pending
helpers_test.go:353: "busybox" [064bee55-c240-4433-bde1-87acf5ac8840] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [064bee55-c240-4433-bde1-87acf5ac8840] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003197645s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-058924 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.31s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-058924 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-058924 --alsologtostderr -v=3: (12.019099093s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.02s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-058924 -n default-k8s-diff-port-058924
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-058924 -n default-k8s-diff-port-058924: exit status 7 (67.336318ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-058924 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-058924 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-058924 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (55.241928671s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-058924 -n default-k8s-diff-port-058924
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.58s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-l99xd" [e5ca3604-3482-491b-a609-dffc6a623f6d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003483824s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-l99xd" [e5ca3604-3482-491b-a609-dffc6a623f6d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005381186s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-058924 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)
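
Note: both UserAppExistsAfterStop and AddonExistsAfterStop above amount to waiting for pods matching "k8s-app=kubernetes-dashboard" in the kubernetes-dashboard namespace to be healthy. A rough kubectl-based equivalent of that wait (the test uses its own internal helpers); the context name comes from this run and the timeout here is arbitrary.

// Sketch: poll until every pod with the dashboard label reports phase Running.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const ctx = "default-k8s-diff-port-058924"
	deadline := time.Now().Add(5 * time.Minute) // arbitrary; the test waits up to 9m0s

	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx,
			"get", "pods", "-n", "kubernetes-dashboard",
			"-l", "k8s-app=kubernetes-dashboard",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		if err == nil && len(phases) > 0 {
			running := 0
			for _, p := range phases {
				if p == "Running" {
					running++
				}
			}
			if running == len(phases) {
				fmt.Println("dashboard pods are running:", phases)
				return
			}
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for dashboard pods")
	os.Exit(1)
}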

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-058924 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)
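
Note: VerifyKubernetesImages dumps the node's image list as JSON and flags images outside the expected core set. The JSON schema of "image list --format=json" is not shown in this log, so the sketch below only runs the command and does a plain substring scan for the non-minikube images the test reported above; it is not the test's structured check.

// Sketch: list images and look for the non-core images named in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const mk = "out/minikube-linux-arm64"
	const profile = "default-k8s-diff-port-058924"

	out, err := exec.Command(mk, "-p", profile, "image", "list", "--format=json").CombinedOutput()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	for _, img := range []string{
		"kindest/kindnetd",
		"gcr.io/k8s-minikube/busybox",
	} {
		fmt.Printf("%-30s present=%v\n", img, strings.Contains(string(out), img))
	}
}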

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (41.7s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-193865 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-193865 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (41.697146585s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (41.70s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-193865 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [5e4582d7-6a89-4582-a1c2-98e78bb9f0d2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [5e4582d7-6a89-4582-a1c2-98e78bb9f0d2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.002968116s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-193865 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.30s)
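
Note: the DeployApp step creates a busybox pod from the repo's testdata manifest, waits for it to run, then shells into it to read "ulimit -n". A sketch of that sequence using the same kubectl commands; the context name and manifest path are taken from this run and are assumptions elsewhere.

// Sketch: deploy testdata/busybox.yaml, wait for the pod, read its open-file limit.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const ctx = "embed-certs-193865"

	if out, err := run("kubectl", "--context", ctx, "create", "-f", "testdata/busybox.yaml"); err != nil {
		fmt.Println(out)
		os.Exit(1)
	}

	// Wait for the pod labelled integration-test=busybox (namespace "default") to reach Running.
	for i := 0; i < 60; i++ {
		phase, _ := run("kubectl", "--context", ctx, "get", "pods",
			"-l", "integration-test=busybox",
			"-o", "jsonpath={.items[0].status.phase}")
		if phase == "Running" {
			break
		}
		time.Sleep(2 * time.Second)
	}

	// Final assertion in the test: exec into the pod and read "ulimit -n".
	limit, err := run("kubectl", "--context", ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	fmt.Println("ulimit -n:", limit, err)
}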

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-193865 --alsologtostderr -v=3
E1227 20:55:44.925575  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:55:44.930857  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:55:44.941324  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:55:44.961676  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:55:45.001940  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:55:45.082337  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:55:45.243186  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:55:45.563788  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:55:46.204770  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:55:47.485320  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:55:50.045616  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-193865 --alsologtostderr -v=3: (11.996675315s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-193865 -n embed-certs-193865
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-193865 -n embed-certs-193865: exit status 7 (77.988834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-193865 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (50.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-193865 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E1227 20:55:55.166800  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:56:05.407106  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:56:25.887349  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-193865 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (50.584647502s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-193865 -n embed-certs-193865
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.97s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-44qk4" [23bb16cd-0858-4438-ac8e-4da39cee9c7b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002904223s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-44qk4" [23bb16cd-0858-4438-ac8e-4da39cee9c7b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003071091s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-193865 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-193865 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (59.58s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-542467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E1227 20:57:06.847628  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:57:13.966661  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-542467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (59.580934409s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (59.58s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (34.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-549946 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E1227 20:57:54.129364  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-549946 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (34.404357134s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (34.40s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-542467 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [839817a5-386c-47cf-acc7-77e328ee53be] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [839817a5-386c-47cf-acc7-77e328ee53be] Running
E1227 20:58:08.267855  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:58:08.273187  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:58:08.283473  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:58:08.303790  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:58:08.344077  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:58:08.424436  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:58:08.584942  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:58:08.905487  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:58:09.546253  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:58:10.826662  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004234482s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-542467 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.44s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-549946 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-549946 --alsologtostderr -v=3: (1.512445915s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.51s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-549946 -n newest-cni-549946
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-549946 -n newest-cni-549946: exit status 7 (72.046774ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-549946 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (14.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-549946 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-549946 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (13.825756634s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-549946 -n newest-cni-549946
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (14.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-542467 --alsologtostderr -v=3
E1227 20:58:18.507822  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-542467 --alsologtostderr -v=3: (12.427503152s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.43s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-549946 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestPreload/PreloadSrc/gcs (5.36s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-038558 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
E1227 20:58:28.747987  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:58:28.768211  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-gcs-038558 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (5.171452152s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-038558" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-038558
--- PASS: TestPreload/PreloadSrc/gcs (5.36s)
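
Note: each PreloadSrc sub-test does a download-only start pinned to a specific Kubernetes version with the preload fetched from the named source (gcs or github), then deletes the throwaway profile. A sketch of that flow with flags copied from the run above; the profile name below is made up.

// Sketch: download-only start with --preload-source=gcs, then clean up the profile.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const mk = "out/minikube-linux-arm64"
	const profile = "preload-dl-demo" // hypothetical profile name

	dl := exec.Command(mk, "start", "-p", profile, "--download-only",
		"--kubernetes-version", "v1.34.0-rc.1", "--preload-source=gcs",
		"--alsologtostderr", "--v=1", "--driver=docker", "--container-runtime=crio")
	dl.Stdout, dl.Stderr = os.Stdout, os.Stderr
	if err := dl.Run(); err != nil {
		fmt.Println("download-only start failed:", err)
	}

	// Clean up afterwards, as the test helpers do.
	del := exec.Command(mk, "delete", "-p", profile)
	del.Stdout, del.Stderr = os.Stdout, os.Stderr
	_ = del.Run()
}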

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-542467 -n no-preload-542467
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-542467 -n no-preload-542467: exit status 7 (78.370118ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-542467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (52.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-542467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-542467 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (51.945450342s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-542467 -n no-preload-542467
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (52.31s)

                                                
                                    
x
+
TestPreload/PreloadSrc/github (7.07s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-github-371459 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-github-371459 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (6.773350641s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-371459" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-github-371459
--- PASS: TestPreload/PreloadSrc/github (7.07s)

                                                
                                    
x
+
TestPreload/PreloadSrc/gcs-cached (0.64s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-cached-876907 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-876907" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-cached-876907
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.64s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-037975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1227 20:58:49.228539  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-037975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (48.001374076s)
--- PASS: TestNetworkPlugins/group/auto/Start (48.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-mhlrk" [ae71b676-51bb-4d41-9c79-2548bd9061e0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003799836s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-mhlrk" [ae71b676-51bb-4d41-9c79-2548bd9061e0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003180988s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-542467 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-037975 "pgrep -a kubelet"
I1227 20:59:29.351733  274336 config.go:182] Loaded profile config "auto-037975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)
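
Note: the KubeletFlags check sshes into the node and lists the running kubelet process together with its command-line flags via "pgrep -a kubelet". A sketch of the same call; the profile name is taken from this run.

// Sketch: print the kubelet process line (PID plus the full set of flags it was started with).
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const mk = "out/minikube-linux-arm64"
	const profile = "auto-037975"

	out, err := exec.Command(mk, "ssh", "-p", profile, "pgrep -a kubelet").CombinedOutput()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	fmt.Print(string(out))
}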

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-037975 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-7bcc4" [e9a49404-a2bf-47da-ac54-41c68631c6c7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1227 20:59:30.189609  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-7bcc4" [e9a49404-a2bf-47da-ac54-41c68631c6c7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.006053741s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.28s)
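
Note: NetCatPod (re)applies the repo's netcat deployment with "kubectl replace --force", which makes the step idempotent across plugin runs, and waits for the app=netcat pod to come up. A sketch of that step; the context and manifest path are from this run, and a single-replica deployment is assumed.

// Sketch: force-replace the netcat deployment and wait for its pod to run.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	const ctx = "auto-037975"

	replace := exec.Command("kubectl", "--context", ctx, "replace", "--force",
		"-f", "testdata/netcat-deployment.yaml")
	replace.Stdout, replace.Stderr = os.Stdout, os.Stderr
	if err := replace.Run(); err != nil {
		fmt.Println("replace failed:", err)
		os.Exit(1)
	}

	// Wait for the single app=netcat pod to report Running, as the test does.
	for i := 0; i < 90; i++ {
		out, _ := exec.Command("kubectl", "--context", ctx, "get", "pods",
			"-l", "app=netcat", "-o", "jsonpath={.items[*].status.phase}").Output()
		if string(out) == "Running" {
			fmt.Println("netcat pod is running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for netcat pod")
	os.Exit(1)
}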

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-542467 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-037975 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-037975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-037975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)
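
Note: the DNS, Localhost and HairPin steps above are three connectivity probes run inside the netcat deployment: in-cluster DNS resolution of kubernetes.default, a TCP check against localhost:8080, and hairpin traffic back to the pod's own "netcat" service. A sketch that runs the same commands from the log and prints the outcome; the context name is from this run.

// Sketch: run the three connectivity probes inside deployment/netcat and report results.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const ctx = "auto-037975"

	probes := []struct {
		name string
		cmd  string
	}{
		{"DNS", "nslookup kubernetes.default"},
		{"Localhost", "nc -w 5 -i 5 -z localhost 8080"},
		{"HairPin", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for _, p := range probes {
		out, err := exec.Command("kubectl", "--context", ctx, "exec",
			"deployment/netcat", "--", "/bin/sh", "-c", p.cmd).CombinedOutput()
		fmt.Printf("%-10s err=%v\n%s\n", p.name, err, out)
	}
}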

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (49.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-037975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-037975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (49.361012948s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (49.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (62.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-037975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-037975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m2.292940303s)
--- PASS: TestNetworkPlugins/group/calico/Start (62.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-wkr4s" [4101ccf6-7b1d-4812-9872-1eeabc314932] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005059979s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-037975 "pgrep -a kubelet"
I1227 21:00:38.444970  274336 config.go:182] Loaded profile config "kindnet-037975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-037975 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-n64cp" [57e95c95-4953-47d9-ba4f-c5b784d40a3f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-n64cp" [57e95c95-4953-47d9-ba4f-c5b784d40a3f] Running
E1227 21:00:44.925542  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/old-k8s-version-855707/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.00381208s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-037975 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-037975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-037975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-85sff" [126b3511-ae01-440d-bc38-4151ba3a8a8a] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-85sff" [126b3511-ae01-440d-bc38-4151ba3a8a8a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004236186s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-037975 "pgrep -a kubelet"
I1227 21:01:13.463314  274336 config.go:182] Loaded profile config "calico-037975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (13.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-037975 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-prhmm" [e9ab7182-b939-4b7d-a5b4-e6740c50435c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-prhmm" [e9ab7182-b939-4b7d-a5b4-e6740c50435c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.006498173s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (55.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-037975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-037975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (55.094160624s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-037975 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-037975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-037975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (65.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-037975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-037975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m5.455850925s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (65.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-037975 "pgrep -a kubelet"
I1227 21:02:09.284704  274336 config.go:182] Loaded profile config "custom-flannel-037975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-037975 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-8vvpp" [29dbfe0f-36e8-4df1-902a-b226bccff61d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1227 21:02:13.966629  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/functional-425652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-8vvpp" [29dbfe0f-36e8-4df1-902a-b226bccff61d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.002532563s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-037975 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-037975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-037975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (53.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-037975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1227 21:02:54.130223  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/addons-686526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-037975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (53.775343619s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.78s)
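
Note: the network-plugin Start runs in this section differ only in how the CNI is selected on "minikube start": kindnet, calico and flannel by name, a custom manifest by path, minikube's default CNI via --enable-default-cni, and no flag at all for the "auto" run. A sketch that loops over those variants with the flags copied from this log; the profile names below are hypothetical.

// Sketch: start one profile per CNI variant used by the runs above.
package main

import (
	"os"
	"os/exec"
)

func main() {
	const mk = "out/minikube-linux-arm64"
	common := []string{"--memory=3072", "--alsologtostderr", "--wait=true",
		"--wait-timeout=15m", "--driver=docker", "--container-runtime=crio"}

	variants := map[string][]string{
		"auto-demo":           nil,
		"kindnet-demo":        {"--cni=kindnet"},
		"calico-demo":         {"--cni=calico"},
		"flannel-demo":        {"--cni=flannel"},
		"custom-flannel-demo": {"--cni=testdata/kube-flannel.yaml"},
		"default-cni-demo":    {"--enable-default-cni=true"},
	}
	for profile, extra := range variants {
		args := append([]string{"start", "-p", profile}, common...)
		args = append(args, extra...)
		cmd := exec.Command(mk, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		_ = cmd.Run() // each start is independent; failures just surface on stderr
	}
}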

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-037975 "pgrep -a kubelet"
I1227 21:02:59.130779  274336 config.go:182] Loaded profile config "enable-default-cni-037975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-037975 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-kpsbm" [716e7e07-e3c2-4b1c-9ac2-2808e7a9c8ac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1227 21:03:03.878047  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:03:03.883465  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:03:03.893727  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:03:03.913984  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:03:03.954251  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:03:04.034721  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:03:04.195235  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:03:04.515717  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:03:05.156590  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-kpsbm" [716e7e07-e3c2-4b1c-9ac2-2808e7a9c8ac] Running
E1227 21:03:06.437103  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:03:08.268468  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 21:03:08.997748  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003837311s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.29s)
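
The NetCatPod step above force-replaces a "netcat" Deployment from testdata/netcat-deployment.yaml and waits for a pod labelled app=netcat to become Ready; the later DNS, Localhost and HairPin probes then exec into that pod. The report does not reproduce the manifest, so the following is only a hypothetical minimal stand-in (the image, args and the omission of the "netcat" Service that the HairPin probe presumably dials are all assumptions, not the file net_test.go actually uses), kept here to show the shape of what gets deployed:

# Sketch of a minimal app=netcat Deployment equivalent to what the test replaces.
kubectl --context enable-default-cni-037975 apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: netcat
  template:
    metadata:
      labels:
        app: netcat
    spec:
      containers:
      - name: dnsutils
        image: registry.k8s.io/e2e-test-images/agnhost:2.40   # assumed image
        args: ["netexec", "--http-port=8080"]                  # assumed; just something listening on 8080
        ports:
        - containerPort: 8080
EOF
# Wait for readiness the same way net_test.go:163 does (timeout value is arbitrary here).
kubectl --context enable-default-cni-037975 wait --for=condition=Ready pod -l app=netcat --timeout=15m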

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-037975 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-037975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-037975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (67.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-037975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1227 21:03:35.951055  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/default-k8s-diff-port-058924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-037975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m7.223009493s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-hn27h" [0235a194-dc5c-41c7-8e17-4a04efc7c82e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004213842s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.02s)
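
The ControllerPod check above does nothing more than wait for a Running pod labelled app=flannel in the kube-flannel namespace. Repeating that check by hand against the same profile should look roughly like this (label and namespace are taken from the log above; the timeout is an arbitrary choice, not the test's own value):

# Manual equivalent of the app=flannel readiness wait performed at net_test.go:120.
kubectl --context flannel-037975 -n kube-flannel get pods -l app=flannel
kubectl --context flannel-037975 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m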

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-037975 "pgrep -a kubelet"
I1227 21:03:43.541371  274336 config.go:182] Loaded profile config "flannel-037975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-037975 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-gwc2r" [92d976f3-8bc1-442e-be24-54dc58a18e87] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1227 21:03:44.839296  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/no-preload-542467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-gwc2r" [92d976f3-8bc1-442e-be24-54dc58a18e87] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003657825s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-037975 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-037975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-037975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-037975 "pgrep -a kubelet"
I1227 21:04:39.681404  274336 config.go:182] Loaded profile config "bridge-037975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-037975 replace --force -f testdata/netcat-deployment.yaml
E1227 21:04:39.844964  274336 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-272475/.minikube/profiles/auto-037975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-xdvrl" [04acdaea-f048-4350-9958-bd85a8e2c21b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-xdvrl" [04acdaea-f048-4350-9958-bd85a8e2c21b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003443557s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-037975 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-037975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-037975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
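
The Localhost and HairPin probes logged above differ only in their target: both run nc from inside the netcat pod, but the first dials localhost:8080 while the second dials the pod's own Service name ("netcat"), so the connection has to loop back through the service to the very pod that opened it. To rerun them by hand against the bridge profile, the commands are exactly those shown in the log (exit status 0 means the port accepted a connection):

# Localhost probe: the pod connects to itself directly.
kubectl --context bridge-037975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# HairPin probe: the pod connects to its own Service name, exercising hairpin traffic.
kubectl --context bridge-037975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"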

                                                
                                    

Test skip (31/332)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.41s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-559752 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-559752" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-559752
--- SKIP: TestDownloadOnlyKic (0.41s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1797: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-371621" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-371621
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-037975 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-037975

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-037975

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-037975

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-037975

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-037975

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-037975

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-037975

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-037975

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-037975

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-037975

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-037975

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-037975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-037975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-037975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-037975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-037975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-037975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-037975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-037975" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-037975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-037975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-037975" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-037975

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-037975"

                                                
                                                
----------------------- debugLogs end: kubenet-037975 [took: 3.257547603s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-037975" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-037975
--- SKIP: TestNetworkPlugins/group/kubenet (3.43s)
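
Every probe in the kubenet debugLogs above fails with "context was not found" or "Profile ... not found" for the same reason: the kubenet-037975 cluster is never started, because net_test.go:93 skips the combination before minikube start runs (crio requires a CNI plugin, so kubenet does not apply). When reading debugLogs like these, it is worth confirming first whether the profile and kubectl context exist at all; the commands below simply follow the hints the log itself prints:

# Confirm whether the profile/context behind the debugLogs actually exists.
out/minikube-linux-arm64 profile list
kubectl config get-contexts kubenet-037975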

                                                
                                    
TestNetworkPlugins/group/cilium (3.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-037975 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-037975

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-037975

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-037975

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-037975

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-037975

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-037975

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-037975

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-037975

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-037975

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-037975

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-037975

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-037975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-037975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-037975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-037975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-037975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-037975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-037975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-037975" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-037975

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-037975

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-037975" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-037975" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-037975

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-037975

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-037975" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-037975" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-037975" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-037975" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-037975" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

>>> host: kubelet daemon config:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

>>> k8s: kubelet logs:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-037975

>>> host: docker daemon status:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

>>> host: docker daemon config:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

>>> host: docker system info:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

>>> host: cri-docker daemon status:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

>>> host: cri-docker daemon config:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

>>> host: cri-dockerd version:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

>>> host: containerd daemon status:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

>>> host: containerd daemon config:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

>>> host: containerd config dump:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

>>> host: crio daemon status:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

>>> host: crio daemon config:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

>>> host: /etc/crio:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

>>> host: crio config:
* Profile "cilium-037975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-037975"

----------------------- debugLogs end: cilium-037975 [took: 3.729241458s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-037975" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-037975
--- SKIP: TestNetworkPlugins/group/cilium (3.88s)
